Institute of Museum and Library Services
Grant Applicants - Outcome Based Evaluation

Frequently Asked Questions

The Institute of Museum and Library Services (IMLS) is a federal agency that fosters leadership, innovation, and a lifetime of learning through grants to museums and libraries. Please see the IMLS Web site at http://www.imls.gov for additional information about IMLS and its activities and grant programs. Since 1997 IMLS has been committed to helping libraries and museums strengthen their programs and their capacity to evaluate the impact of their work through systematic evaluation of results, or outcomes. Some of the questions libraries and museums of all sizes and types have asked about outcome-based evaluation are answered below. IMLS defines outcomes as they pertain to its own grant programs and its typical grantees, and does not claim or intend to speak beyond its own interests and community. At the same time, we are aware that there is a broad and growing trend toward accountability in the form of outcome-based reporting to government at all levels, to foundations, and to donors. IMLS's perspective is evolving in response to experience in the field, but we hope the following responses clarify our current thinking.

What is outcome-based evaluation (OBE)?
Outcome-based evaluation, sometimes called outcomes measurement, is a systematic way to determine whether a program has achieved its goals. The organized process of developing an outcome-based program and a logic model (an evaluation plan) helps institutions articulate and establish clear program benefits (outcomes), identify ways to measure those benefits (indicators), clarify the specific individuals or groups the program's benefits are intended for (target audience), and design program services to reach that audience and achieve the desired results.

What is an "outcome" and how do you evaluate (measure) them?
An outcome is a benefit that occurs to participants of a program; when the benefits to many individuals are viewed together, they show the program's impact. Typically, outcomes represent an achievement or a change in behavior, skills, knowledge, attitude, status or life condition of participants related to participation in a program. In OBE, an outcome always focuses on what participants will say, think, know, feel, or be-not on mechanisms or processes which programs use to create their hoped-for results. Well-designed programs usually choose outcomes that participants would recognize as benefits to themselves. To simplify planning for evaluation, state the outcome you want to produce in simple, concrete, active terms.

Poor Outcome Statements
•  Students will know how to use the Web
•  Patrons will use the automated ILL system
•  Users will have better health information
•  Library staff will be trained in reference skills
•  Democracy will flourish

Better Outcome Statements
•  Students will demonstrate information literacy skills
•  Patrons will report high satisfaction with the automated ILL service
•  Patrons will make healthier life-style choices
•  Library staff will provide faster, more accurate, and more complete answers to reference questions
•  Visitors will register to vote

What is the difference between outputs and outcomes?
Outputs are measures of the volume of a program's activity: products created or delivered, people served, activities and services carried out. Think of outputs as the "things" piece of evaluation. Outputs are almost always numbers: the number of loans, the number of ILLs, the number of attendees, the number of publications, the number of grants made, or the number of times a workshop was presented. Outcomes are the "people" or the "so what" piece - what happened because of the outputs.

Outputs
•  42 staff members will complete training
•  37 libraries will participate in reference training
•  4 workshops will be held
•  participants will receive 3 CEUs
Outcomes
•  Library staff will provide faster, more accurate, and more complete answers to reference questions
•  Customers will report high satisfaction with reference service

How do I choose outcomes for my program?
First, think through and describe the purpose of the program carefully. A program is not usually developed only to carry out various actions or tasks; there is a reason for undertaking the tasks and offering the services. Most modern museums and libraries don't build collections only to own them, or to go through the processes of cataloging, storing, and maintaining them. They develop collections to support the needs of existing or anticipated users for information and education.

Ask, "why are we offering this program, what do we want to accomplish, and who do we want to benefit?" It may be helpful to ask program staff, program partners, and other stakeholders, "if we are really successful with this program, what will the results look like for the people we served?" Equally important is knowing your audience, their needs and wants, and what your program can do to help them achieve their aims.

The answers to those questions should allow you to describe the changes or impact that you want to see as a result of the program. Those hoped-for changes become the intended program outcomes.

What is an indicator?
Indicators are the specific, observable, and measurable characteristics, actions, or conditions that tell a program whether a desired achievement or change has happened. To measure outcomes accurately, indicators must be concrete, well-defined, and observable; usually they are also countable.

Poor Indicators
•  The # and % of students who know how to use the Web
•  Patrons will report high satisfaction with the automated ILL service
•  Users will make healthier choices

Better Indicators
•  The # and % of participating students who can bring up an Internet search engine, enter a topic in the search function, and bring up one example of the information being sought within 15 minutes
•  The # and % of patrons who say they are "satisfied" or "very satisfied" with the automated ILL service after using the service and who use the service more than once a month for six months
•  The # and % of users who report they made one or more life-style changes from a list of 10 key life-style health factors in the last six months

It is easy to construct a good indicator if you use the format:

Number and/or percent of a specific target population who report, demonstrate, exhibit an attitude, skill, knowledge, behavior, status, or life condition in a specified quantity in a specified timeframe and/or circumstance

•  Number and percent: Both number and percent are usually specified to provide adequate information. If only two people participate in your program, after all, reporting that 50% of them benefited could be misleading. Examples: 30% of 150, 75% of 25, 10% of 1,500 (see the sketch after this list).

•  Target audience: The group of people the program hopes to affect. Effective programs keep the characteristics of the people they want to benefit clearly in mind. The more narrowly and specifically the group of people who are expected to participate in a program can be described, the greater the likelihood that a program will be designed to actually reach them. Examples (low to high definition): Maryland residents, Baltimore high-school students, Howard County mothers at literacy level 1 or below.
•  Report, demonstrate, exhibit: Note that all of these are active, observable behaviors or characteristics that don't depend on guesswork or interpretation.

Attitude: What someone feels or thinks about something; e.g., to like, to be satisfied, to value.
Skill: What someone can do; e.g., log on to a computer, format a word-processed document, read.
Knowledge: What someone knows; e.g., the symptoms of diabetes, the state capitals, how to use a dictionary.
Behavior: How someone acts; e.g., listens to others in a group, reads to children, votes.
Status: Someone's social or professional condition; e.g., registered voter, high-school graduate, employed.
Life condition: Someone's physical condition; e.g., non-smoker, overweight, cancer-free.
•  Specified quantity and specified timeframe or circumstance: This is the measurable part of an indicator. It asks the program developer to choose a quantity of achievement or change that is enough to show the desired result happened, and the circumstances or timeframe in which the result will be demonstrated. Examples: three times per week, in 15 minutes or less, 6 months after the program ends, 4 or higher on a 5-point scale.
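
To make the arithmetic behind an indicator concrete, here is a minimal sketch in Python (the function name and the sample data are hypothetical, used only to illustrate the format above): it counts how many members of a target population met a criterion within a specified timeframe and reports both the number and the percent.

def summarize_indicator(results, criterion):
    # Return (number, percent) of participants whose result meets the criterion.
    met = [r for r in results if criterion(r)]
    number = len(met)
    percent = 100.0 * number / len(results) if results else 0.0
    return number, percent

# Hypothetical data: minutes each participating student needed to complete
# the search exercise; the indicator asks for completion within 15 minutes.
minutes_to_complete = [9, 12, 14, 22, 11, 16, 8, 13]

number, percent = summarize_indicator(minutes_to_complete, lambda m: m <= 15)
print(f"{number} of {len(minutes_to_complete)} students ({percent:.0f}%) "
      "completed the search within 15 minutes")

Reporting both the count and the percent, as the bullet above recommends, avoids the misleading "50% of two participants" case.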

What kinds of programs are best suited for OBE?
Most programs can incorporate OBE as an effective and efficient management tool. Specifically, OBE is geared toward measuring the impact of a program on a specific group of people known as a target audience. Any program that intends to educate or train participants (to change or build attitudes, skills, knowledge, behavior, status, or life condition) can be designed with outcomes at its core and can be evaluated using OBE concepts.

For example, among the State Library Programs there are many examples of statewide professional training initiatives. In Texas, for instance, regional providers offer technology training to help library staff keep their computers running and online. Texas could evaluate the success of that program by looking for evidence that librarians who complete the training can solve basic, frequently experienced computer problems, such as a frozen screen - that would be a desired outcome that can be measured. IMLS provides LSTA funds for technology infrastructure because lawmakers assume that technology is needed to better support both users and the staff who provide services. The State Library Administrative Agencies could know whether they were meeting those goals by looking for outcomes related to technology training or to the use of expanded resources or information.

Another example comes from IMLS's CAP program. CAP provides information to help museums set priorities and address the preservation and conservation needs of their collections. The information, or the report that the CAP consultant provides, is not the purpose of CAP. The purpose (the desired outcome) is changed knowledge and behavior on the part of museum staff - we hope that they in turn will improve collections maintenance practice and create a formal, prioritized management plan to address collection needs. Here, too, we assume that in the long term there will be a benefit to end audiences in improved or expanded exhibits and programs built on the collections protected by the staff's improved actions. CAP recipients could be asked to report the extent to which they've achieved those goals to tell IMLS whether CAP has realized its intended outcomes.

Most National Leadership Discretionary Grants include the intention to provide a model for other institutions. We envision that the research they carry out will be used by others. Grantees can evaluate the extent to which they have succeeded in communicating their model by asking their target audiences (usually library or museum professionals and educators) whether they know key concepts from the research, and/or how they have used or intend to use its results.

Of course there are projects for which OBE is not applicable. We encourage museums and libraries to talk to IMLS staff if they are uncertain if OBE can be useful for their proposed or funded project.

Do I have to evaluate every program my institution offers?
No. We believe IMLS constituents will come to know the benefit of OBE and will want to incorporate it in many, if not all, programs, particularly those that have a clear audience to whom a program is targeted. We're urging library and museum staff to choose one program that they offer, and to "pilot-test" OBE with that program. That will provide the experience to decide what skills and resources an institution needs to develop to demonstrate and report outcomes to its stakeholders.

How many program participants have to be evaluated, all or a sample?
For many programs it is possible to evaluate the impact on all participants. Others will have access to only a sample of participants. This is often true, for example, of programs that provide digital resources - collections, exhibits, curriculum tools, or Web sites. Many programs will seek volunteers to answer questionnaires or to participate in focus groups to provide outcome information. This is perfectly acceptable.

Will funders pay for small outcomes?
For IMLS it is less about small or large outcomes than about what you hoped to achieve for an audience, what you learned in the process, and what was reasonable to expect for that audience. In some cases a 10% improvement is very significant, while in others a 90% impact is reasonable to expect. You need to know your audience and your stakeholders and create appropriate goals and expectations. When that is done, and outcomes still fall short of goals, OBE allows institutions to assess and explain why outcomes fell short and to learn from the shortfall. Without OBE, it can appear as if a program simply didn't do what it said it would. With OBE, you have the opportunity to learn why and make improvements for the next offering, or the next program.

IMLS turns to its reviewers to decide what projects seem most promising and most needed. If a proposed project can show a clear plan for evaluation that will demonstrate meaningful outcomes (even small ones) concretely and objectively, we believe reviewers will find it very competitive.

Finally, the "size" of the outcome is proportionate to the size of the target audience and the duration or the intensity of their experience in a program. If a project works closely with a small number of participants, the outcome might look small, but might be profound for those participants. If a project offers a rapid service to a very large number of participants or users, the outcome is likely to be minor, but may reach many people. Reviewer's assessments of a proposal consider those factors.

Many proposals make idealized claims for anticipated contributions without offering any concrete information about how project managers will know whether their intentions were realized. Some favorite examples include "if this project is funded, democracy will flourish" and "if other states followed our model they would find it very productive." It is increasingly important to resource allocators and policy makers that programs or projects have concrete audience benefits, with services designed to achieve them for a clearly defined audience, and that managers demonstrate that the benefits were achieved.

Can my program take credit for large outcomes?
Certainly, if the outcomes were logical and closely related results of the services provided. "Attribution" is less concerned with big or small outcomes and more concerned with the logical connection between services and outcomes and the clarity of indicators. Part of the usefulness of OBE is the concrete, objective way it can connect participation in a program or service to specific knowledge, attitudes, behaviors, skills, and other achievements.

What does OBE cost?
On average, an institution should budget 7-10% of a program's total budget to cover the costs of OBE. Almost all funders require some evaluation, and many request substantive evaluation. As a result, funders expect evaluation costs to be included in the budget for any project. Remember that in the case of State Program grants, LSTA funds may be used for evaluation. In the case of IMLS discretionary grants, staff time and other resources required for evaluation can be used to match funds awarded directly, or funds can be requested for evaluation. The exact cost will depend on the project, the level of evaluation, and the knowledge of the organization.

If OBE is not the same as academic research, and the results may not be reliable evidence of outcomes, why should I do it?
Formal research is one way of capturing information, not the only way. OBE is a strong, effective and reliable management tool that provides an institution with information regarding the degree to which a program did what it set out to do. While it does not allow you to determine and claim unique or complete credit for an outcome, it does allow you to demonstrate the degree to which a program contributed to the outcome for individuals. If you have no information, you cannot credibly claim any contribution to impact.

This is a burning question for many in the library and museum worlds, in part because academic training conditions us to look skeptically at any information that is not statistically valid, rigorous in its sample selection, and otherwise derived from the scientific model. In OBE, we're not looking for information we intend to extend to other institutions or contexts. Instead, we're looking to see if what we did had the result we intended. That information helps us make decisions about a particular program: whether to continue it, expand it, improve it, or replace it with another.

OBE normally looks at an individual program's participants for logical, credible evidence that a limited number of very specific, observable attributes or phenomena happened in relatively close proximity to an experience or service designed to produce them for those particular people.

OBE doesn't usually look for evidence that participants gained more, or benefited more, than non-participants in whatever is being evaluated. It is not intended to prove that one program did something more effectively than another (although that is possible).

If a project intends to show unique attribution, to demonstrate the relative worth of one approach measured against others, or to provide a tool for use by other organizations, then of course it needs to turn to the tools and criteria of research. Since the use of the data provided by OBE is limited, we can usually be satisfied with information that is accurate, without requiring statistical rigor, blind or random sampling, or other characteristics of research intended for broad application.

What do I look for in an evaluator?
Someone who has a strong working knowledge of outcome-based evaluation - measuring impact on the people served by a program - and who also has knowledge of and experience working in your discipline. A good evaluator can quickly assess and learn your specific programs and mission. It helps, but is not a requirement, that they have experience evaluating similar projects.

How many outcomes should my program have?
A program needs at least one outcome; most programs are likely to have more than one. It is important to consider what the purpose of the program is and the ways you would expect participants to benefit from it. These benefits will likely be the outcomes for your program, but you need not measure everything. You may want to prioritize the list and determine what you and your program's stakeholders really need to know about the program's impact.

What is a logic model and is it necessary?
A logic model is a step-by-step approach for defining and measuring outcomes. It is your program's evaluation plan. It shows how you will measure outcomes, what information you need to collect, whom you will collect information about, when you will collect the information, and what targets you have chosen for the outcomes.

Yes, a logic model is essential to the success of your institution's implementation of outcome-based evaluation. Without this, outcome based evaluation will not become a reality for your institution.

Logic Model Elements and Structure

Outcome: The intended impact. Example: Students will have basic Internet skills.
Indicator: Observable and measurable behaviors and conditions. Example: The # and % of participating students who can bring up an Internet search engine, enter a topic in the search function, and bring up one example of the information being sought within 15 minutes.
Data source: Sources of information about the conditions being measured. Example: Searching exercise, trainer observation.
Applied to: The specific group within an audience to be measured (all or a subset). Example: Howard County 7th-8th graders who complete the workshop.
Data interval: When data will be collected. Example: At end of workshop.
Target (Goal): The amount of impact desired. Example: 85% of approximately 125 participants.
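
For institutions that track their logic models electronically, the elements above map naturally onto a simple record. The sketch below is illustrative only (the class and field names are hypothetical, not an IMLS format); it shows one logic model row, in Python, using the Internet-skills example from the table:

from dataclasses import dataclass

@dataclass
class LogicModelRow:
    outcome: str        # the intended impact
    indicator: str      # observable, measurable behavior or condition
    data_source: str    # where evidence of the indicator will come from
    applied_to: str     # the specific group within the audience to be measured
    data_interval: str  # when the data will be collected
    target: str         # the amount of impact desired

internet_skills = LogicModelRow(
    outcome="Students will have basic Internet skills",
    indicator=("# and % of participating students who can bring up a search "
               "engine, enter a topic, and find one example within 15 minutes"),
    data_source="Searching exercise, trainer observation",
    applied_to="Howard County 7th-8th graders who complete the workshop",
    data_interval="At end of workshop",
    target="85% of approximately 125 participants",
)
print(internet_skills.outcome, "->", internet_skills.target)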

How complicated is outcome based evaluation?
Once the concepts are understood and you have implemented OBE a few times, it is a simple process to manage. The key to success is the commitment of the institution and the clear identification of roles in managing OBE.

How much time will it take?
It isn't possible to prescribe a time for all programs. It does take a commitment of time and resources to get it done. The majority of time comes at the front end, particularly as you first begin to implement outcome-based evaluation in your institution. In compensation, once incorporated, OBE can save significant time in planning and management by allowing you to get at the right questions, and answers, early on in the program planning process.

What can outcome based evaluation do for my institution?
Employing outcome-based evaluation and reporting on a program's impact can have many benefits for an organization:

•  First, it can help institutions tell their story in ways their stakeholders and the general public can understand and appreciate. It helps institutions convey important information about the collective impact on their program participants, while maintaining the ability to convey the very powerful and personal stories that show how important the program was to specific individuals.
•  Second, it can help better position institutions to request and receive funding because they can describe the intended benefits and impact of a proposed program in very specific terms by identifying what the program will do for participants. This is particularly important given that more and more funders expect programs to identify what they hope to achieve as a result of funding.
•  Third, when OBE becomes part of an organization's management routine, its programs can be improved as a result. Program goals are well planned and established, and these goals are regularly reviewed. Stakeholders are informed about the impact of funded programs. In turn, outcome-based evaluation helps an organization's program staff better communicate the benefits they intend to deliver to program participants - it can aid recruitment and marketing.

Aren't some things difficult to measure?
Some things will seem more difficult to measure (evaluate) than others, and not all things programs accomplish need be measured. It is often more straightforward to measure "hard" impact, such as knowledge, behavior, and skills than it is to measure "soft" impact such as attitudes. Measuring attitude changes or other "soft" impacts is not actually more difficult, but it may require more creativity. Regardless, clarifying the relationship between an outcome and measurable and observable "indicators" is key to success.

How will I know if my outcomes are good enough?
Outcomes are effective if they 1) are closely associated with the purpose of a program and describe what an organization wants to make happen for people, 2) are realistic and within the scope of what the program can affect, and 3) have indicators that allow them to be measured.

How do I report outcome based evaluation information?
Consider what your program's stakeholders want to know about the results of your program when developing reports from outcome-based evaluation data. The institution's Board, its community, and funders may want similar information, but this does not mean that one report will satisfy everyone. In general, consider the following as desirable information for reports:

•  Needs identified
•  Inputs (what we used)
•  Activities and services (what we did)
•  Audience (characteristics and participation)
•  Outputs (what we produced)
•  Outcomes (what impact we achieved and how we know) and
•  Interpretation (what it all means, why it matters)

Do I have to do this?
IMLS does not currently require its grantees to conduct outcome-based evaluation, but it supports and encourages it as a valuable management tool. At the same time, IMLS is required to report to Congress in outcome-based terms; we cannot do that without input from you. We consider the consistent use of outcome-based evaluation to be an effective and efficient way for all programs to capture critically important information and to tell their story persuasively. IMLS is gradually strengthening information about outcome-based evaluation in guidelines for its discretionary grant programs and its program for State Library Agencies, and is considering the benefit of making outcome-based evaluation for funded programs a requirement at some future time.

Where can I get more information?
Current IMLS grantees can contact their program officer to discuss the specifics of their IMLS grant program and its evaluation. Institutions considering a proposal to IMLS can contact the grant officer for that program.

IMLS offers an online course, Shaping Outcomes, that can be taken as a no-cost, instructor-led, self-paced online tutorial or as a distance-learning course. See the IMLS bibliography for OBE for a variety of helpful manuals and other resources, many of which are available at no cost online. Other organizations offer assistance in the context of their grants. Among the most readily available are materials from Project STAR of the Corporation for National Service, the Kellogg Foundation, and the United Way of America. All are referenced in the IMLS OBE bibliography, and some of these organizations have regional and local offices.

K. Motylewski/C. Horn 2/8/02
