Low Income Home Energy Assistance Program (LIHEAP)

Managing For Results Primer

Prepared For:

LIHEAP Advisory Committee on Managing for Results
U.S. Department of Health and Human Services
Administration for Children and Families
Division of Energy Assistance, Office of Community Services

June 1999

Table of Contents

I. Overview

II. What are the Steps of a Managing-for-Results Process?

Step 1. Deciding What Measurable Results You are Trying to Achieve

Step 2. Identifying Specific Measures to Track and Assess Progress

Step 3. Collecting and Analyzing Data

Step 4. Using Findings to Improve Program Performance

Step 5. Constantly Improving the Process Itself

III. Conclusion

Bibliography

 

This document has been prepared for the Division of Energy Assistance's LIHEAP Advisory Committee on Managing for Results by Hap Hadd, private consultant, under subcontract to the National Energy Assistance Directors' Association (NEADA) through ACF Purchase Order No. 980163.

The Administration for Children and Families (ACF) established the LIHEAP Advisory Committee on Managing for Results in October 1997 as a joint partnership between the states, local agencies, other program stakeholders and ACF. The Advisory Committee's task is to collaborate with ACF on developing recommendations on cost-effective performance goals and measures for LIHEAP that will meet the requirements of the Government Performance and Results Act (GPRA) of 1993. In addition, the Committee is charged with enhancing program management practices through the approach known as "Managing for Results." ACF has awarded NEADA small purchase orders to support the work of the Advisory Committee.

The views and opinions of the author or the LIHEAP Advisory Committee on Managing for Results expressed herein do not necessarily state or reflect those of the U.S. Department of Health and Human Services' Administration for Children and Families.

Copies of this document can be obtained by contacting NEADA at the following address:

National Energy Assistance Directors' Association
P.O. Box 42655
Washington, D.C. 20015-0655
202-237-5199


OVERVIEW

The LIHEAP Advisory Committee on Managing for Results believes that LIHEAP grantees should focus on achieving clearly stated measurable results for our customers. LIHEAP grantees should plan for and measure the results of their work and strive to produce improved services and benefits for people. A well-run managing-for-results process would help to achieve this goal.

Historically, programs have tracked their resources (inputs) and products (outputs) and focused on efforts to increase their internal efficiency. More recently, the public and government at every level are asking that programs state expected results more clearly and report progress in achieving these results, particularly benefits for the people they are serving. How have the lives of clients changed for the better? To compete successfully for resources, programs must be able to demonstrate that they are delivering these types of results for their cost. This emphasis has led to a focus on the process of managing-for-results.

The major purpose of this simple "How To" primer is to demystify the managing-for-results process. Many of you will discover that you are already operating some form of such a process. This presentation focuses on:

  • Providing an overview of the process;
  • Describing briefly its major steps - there is no one "right" approach, but there are some essential steps; and
  • Pointing grantees to more detailed manuals and materials.

There are numerous sources of information on this subject. To paint this very broad-brush picture, we have borrowed the majority of our ideas from three related documents:

  • Measuring Program Outcomes published by the United Way of America,
  • Evaluating Programs published by the Wisconsin Division of Housing, and
  • Best Practices in Performance Measurement, a benchmarking study conducted by the Office of the Vice President's National Performance Review.

Once you feel comfortable with the basics, we refer you to these and other potentially useful documents listed in the attachment, which should help you further along in developing a strong managing-for-results process.

A couple of comments on terminology before getting to the basics. There are a number of terms associated with managing for results, and practitioners don't always use the same set of definitions. Therefore, it is important that people working in the same process talk the same language and use an agreed-upon set of definitions, whatever they are. At the same time, there should not be so much concern about definitions that implementation of the process is stopped in its tracks. Agree to use definitions that are clear and generally accepted and move on. We have tried to minimize this "issue" by using only a few definitions.

WHAT ARE THE STEPS OF A MANAGING-FOR-RESULTS PROCESS?

 

As noted above, managing-for-results is planning for and measuring the results of operations and striving to produce improved services and benefits for people. There is no single design for a managing-for-results process. However, there is general agreement that such a process should include certain key elements or steps. Specifically:

  • Step 1 - Deciding what measurable results you are trying to achieve,
  • Step 2 - Identifying specific measures to track and assess progress,
  • Step 3 - Collecting and analyzing data,
  • Step 4 - Using findings to improve program performance, and
  • Step 5 - Constantly improving the process itself.

Given these elements, where is the mystery? These are activities that many of you already perform to varying degrees. However, although these appear to be simple steps, it takes discipline and hard work to carry out each well. Also, what may be lacking in many current situations includes: 1) tying these elements together in a comprehensive process, 2) ensuring that the focus is primarily on results for clients, and/or 3) making full use of results to improve performance.

To be successful, you need to address the steps of the process in the order listed above and need to be as specific as possible at each stage. Fuzzy thinking in determining what you are trying to achieve will lead to difficulty in identifying appropriate measures, which in turn will complicate data collection, analysis and taking corrective actions.

As a process matures, you should ensure there are feedback loops among the steps (e.g., you will need to revise your stated measurable results as they are achieved; an ability to collect new or different data may change the measures you use to report progress).

Step 1 - Deciding What Measurable Results You Are Trying to Achieve

Developing clear and concise result statements is the essential first step in the process. These statements need to describe specifically what you are trying to achieve. They need to be thought of as a communications tool for both internal and external use - to help focus your operations and to sell your program.

A workgroup with members from across the program is a good approach for managing this step. As a prelude to developing result statements, it is important for the group to gather background information and feedback from various sources such as:

  • Agency and program materials (e.g., authorizing legislation, mission, strategic plan) and documents from other organizations providing similar services,
  • Program staff, volunteers, governing board and relevant committees,
  • Current and past clients, and
  • Representatives from agencies that are now providing services to past clients.

With this information, the group could use a variety of techniques (e.g., brainstorming, the logic model described below) to develop a preliminary set of result statements. The various techniques and sources will frequently produce a large number of possible result statements. A major responsibility for the workgroup will be to narrow down, through consensus, the set of possible statements to a manageable number of key priorities.

If an organization has already developed a strategic plan, the plan's broad, long-range goals provide the best basis for developing the more near-term results that a managing-for-results process should track. The strategic plan identifies the organization's highest priorities and the near-term results should flow from the strategic goals.

Whatever approach you use to develop your set of results, we urge you to think of them as on a continuum composed of different types of results statements. There could be: 1) process or capacity building results (e.g., initiating a new educational program), 2) output results (e.g., providing products and services to clients), and 3) outcome results (e.g., changes in clients' skills, behaviors and/or conditions). Process results support achieving output results; output results support achieving outcome results.

One technique some successful programs have used for identifying results along the continuum is called the logic model. The model is used to identify "logical relationships" between the results of a program activity and an outcome (a result stated in terms of clients). By providing a particular service, you assume you will influence a particular positive outcome for your clients. In using the model, you should not be deterred if you can't clearly identify "cause and effect" relationships (i.e., a specific activity has been "proven" to lead directly to a specific outcome). That's why it is called a logic model; you are assuming there is a logical, although unproven, relationship. You do need to describe what you believe the relationships are and over time begin to test your relational assumptions. Usually, many activities outside of our control will also influence particular outcomes.

In one approach for using the model, you first identify the key activities you are performing and then ask the simple question "why" - why are we doing these activities? To what end? By continually asking and answering the "why" question, you should be able to identify results along the continuum that eventually focus on benefits for clients. The continuum could have three types of outcome results:

  • Initial outcomes - changes in clients' knowledge, attitudes, and skills
  • Intermediate outcomes - changes in clients' behavior resulting from new knowledge, attitudes or skills
  • Longer-term outcomes - meaningful changes in clients' condition or status

Keep this in mind - If a program cannot link its activities to one or more outcomes, then why does it exist? Use the logic model or some other technique to link program activities to various types of outcomes.

Programs that have developed a strategic plan with a broad set of long-range goals can also use the logic model to identify results along the continuum. By asking the question "how" - how are we going to achieve these long-range goals? - you can begin to identify the strategies and the near-term results you need to achieve to eventually accomplish the strategic goals. Again, you use the model, but in reverse, to identify "logical relationships" between outcome results and the results of a program activity.

In using the logic model, movement forward and back along the continuum of results comes from asking and answering the "why" and "how" questions.

Logic Model: Continuum of Results

Process/Capacity Building → Output → Initial Outcome → Intermediate Outcome → Long-term Outcome

Asking WHY? moves you to the right along the continuum (toward outcomes); asking HOW? moves you to the left (toward activities).
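Movement along the continuum can also be sketched as a small ordered list that the two questions traverse. This is an illustrative sketch only; the stage names come from the primer, while the function names are our own:

```python
# The continuum of results, ordered from program activities toward
# client outcomes (stage names taken from the primer).
CONTINUUM = [
    "Process/Capacity Building",
    "Output",
    "Initial Outcome",
    "Intermediate Outcome",
    "Long-term Outcome",
]

def ask_why(stage):
    """Asking "why?" moves one step to the right, toward client outcomes."""
    i = CONTINUUM.index(stage)
    return CONTINUUM[min(i + 1, len(CONTINUUM) - 1)]

def ask_how(stage):
    """Asking "how?" moves one step to the left, toward program activities."""
    i = CONTINUUM.index(stage)
    return CONTINUUM[max(i - 1, 0)]
```

For example, `ask_why("Output")` returns "Initial Outcome", and `ask_how("Initial Outcome")` returns "Output".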

There are a number of characteristics that each result statement should have. The first two below follow from using the KISS approach - keeping it short and simple - which we highly recommend as the starting point for your effort. Characteristics of a good result statement include:

  • Short - 8 to 10 words max, if possible.
  • Simple - One idea per result statement. List related results as separate statements. Remember, you are trying to communicate clearly; don't complicate and confuse people by combining ideas. Also, don't include strategies/means for achieving the result within the statement itself; this information is more appropriate for a detailed plan. Statements should concentrate on "what" not "how."
  • Outcome-oriented - Wherever possible, state results in terms of the people you are trying to assist; an outcome result. However, early in this effort you frequently will need to state results in terms of process, capacity building and/or outputs. This is fine but your goal should be to move toward having outcome-oriented result statements.
  • Feasible - State an accomplishment that is appropriate to the level of funding, capacities of the program, and attainable within a 1 to 2-year timeframe.

  • Priority - Represents a result that is important to measure.

  • Relational - Logically related to program activities.

One final point in this step. There are many possible formats for result statements. A clear, concise description of what you expect to achieve is paramount. A simple format we have found useful contains just four parts - Change, Target, What, for Whom.

Change is a verb that shows direction (e.g., increase, decrease),
Target is a level of performance (e.g., 50%),
What is the subject (e.g., participation, awareness), and
Whom is the object (e.g., elderly, children).

Frequently, result statements include a time dimension whereby something is to be accomplished within a particular timeframe, e.g., a year.
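As an illustration, the four-part format can be assembled mechanically. The function below is a sketch, not a prescribed template; its name and wording are our own:

```python
def result_statement(change, target, what, whom, timeframe=None):
    """Build a result statement from the primer's four-part format:
    Change + Target + What + for Whom, with an optional timeframe."""
    statement = f"{change} by {target} {what} for {whom}"
    if timeframe:
        statement += f" within {timeframe}"
    return statement

# e.g., result_statement("Increase", "50%", "participation", "the elderly",
#                        "one year")
# -> "Increase by 50% participation for the elderly within one year"
```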

Here is an example of using the logic model to develop result statements along the continuum.

Output Result - Increase by W the number of clients completing energy efficiency education program
WHY are we trying to educate more clients?

Initial Outcome - Increase by X% clients' understanding of using energy efficiently
WHY are we trying to increase client understanding?

Longer-term Outcome - Reduce by Y% energy expenditures in client households.


Step 2 - Identifying Specific Measures to Track and Assess Progress

As you begin this second step in the process, it is important to recognize that developing good measures is just as much an art as it is a science. You need to put a reasonable amount of thought, time and effort into developing your first set of measures. However, realize that only in working with the measures will you determine if they are truly useful. Fully expect to eliminate or modify a significant percentage of your original measures, although measures should stabilize over time. This is a learning process and experimentation should be encouraged.

Having opened with this observation, there are some "rules" that will help in developing a good initial set of measures:

  • Measures need to "flow from" result statements. There should be a clearly understood relationship between the measure and the result statement. The measure should identify the specific observable accomplishment or change that will tell if the result has been achieved. Well-developed result statements frequently will be self-measuring; measures are quite straightforward and obvious.
  • Measures must be observable and measurable. Measurable does not necessarily mean numerical because a legitimate result could be something that happens or doesn't happen (e.g., a facility becomes operational by a specific date). However, numerical measures are preferred and are extremely useful because they provide information on incremental progress.
  • Every result statement must have at least one measure. If you can't measure it, you can't manage it! At the same time, measures should capture all aspects of the result statement and therefore it may be appropriate for a result statement to have more than one measure.
  • Measures (and result statements themselves) should not be ambiguous. Such terms as "substantial" and "adequate" are not sufficiently specific and therefore are subject to interpretation and confusion. Avoid them.
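
The rules above lend themselves to a simple mechanical check. A minimal sketch - the helper name is ours, and the two flagged words are just the primer's examples (extend the list as needed):

```python
# Words the primer flags as too vague for result statements or measures.
AMBIGUOUS = {"substantial", "adequate"}

def check_measures(result_statement, measures):
    """Apply two of the primer's rules: every result statement needs at
    least one measure, and ambiguous terms should be avoided."""
    problems = []
    if not measures:
        problems.append("no measure for: " + result_statement)
    for text in [result_statement] + list(measures):
        if any(word in text.lower() for word in AMBIGUOUS):
            problems.append("ambiguous wording: " + text)
    return problems
```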

There are situations when it may be difficult to measure outcomes directly (e.g., clients are anonymous, assistance is short-term, major changes are not expected for many years). In these cases, "proxy" measures are acceptable. These are measures, normally of an activity, for which a rational connection can be made to a desired but non-measurable outcome (a la the logic model). When using a proxy measure, you need to provide the thinking behind using the measure and be prepared to discuss and defend its use. For example, the LIHEAP Committee on Managing for Results has focused on the targeting of vulnerable households (i.e., a household with at least one member who is either elderly, disabled, or a young child) as a proxy performance measure for LIHEAP. Targeting in this context means reaching a higher number of vulnerable eligible households and providing them with a higher level of LIHEAP benefits than non-vulnerable eligible households. It is assumed that such targeting will safeguard the health of households with members who are at the highest health risk from the effects of unsafe temperatures in their homes during the winter or summer.

Using our earlier examples of result statements, here are examples of measures that "flow":

  • Result Statement: Increase the number of clients completing energy efficiency education program
    Possible Measure: Number of clients (very straightforward)

  • Result Statement: Increase by X% clients' understanding of using energy efficiently
    Possible Measure: % increase in clients' knowledge of energy efficiency, based on scores from pre- and post-tests

  • Result Statement: Reduce by Y% energy expenditures in client households
    Possible Measure: Annual household energy expenditures before and after the education program
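The two numerical measures above reduce to a simple before/after percent change. A sketch with hypothetical numbers:

```python
def percent_change(before, after):
    """Percent change from a 'before' value to an 'after' value."""
    return 100.0 * (after - before) / before

# Initial outcome measure: % increase in clients' knowledge, from
# average pre- and post-test scores (numbers are hypothetical).
knowledge_gain = percent_change(60.0, 75.0)          # +25.0%

# Longer-term outcome measure: change in annual household energy
# expenditures before and after the program (hypothetical).
expenditure_change = percent_change(1200.0, 1080.0)  # -10.0%
```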

Step 3 - Collecting and Analyzing Data

Determining what data to collect should begin with precisely defining all the key terms of each result statement and associated measures. Defining key terms reduces vagueness and helps to identify the needed data, potential data sources and data collection methods (e.g., program records, questionnaires, and interviews).

Data collection is a process unto itself. United Way's Measuring Program Outcomes has a number of helpful sections devoted to this subject. As pointed out in that document, you should consider several factors in reviewing data collection approaches - cost, amount of training required for data collectors, completion time and response rate.

It is important to remember that data from measures tells you what happened but not why. And, it is essential to determine the "why" in order to decide what actions to take to achieve improvement. After all the front-end effort to develop a managing-for-results process, data analysis can be the most difficult, but also most rewarding step in the process. At this stage, you are comparing results to expectations. When things don't work out exactly as planned, which is frequently the case, trying to figure out why and what to do about it presents you with difficult puzzles and exciting possibilities.

Data analysis frequently begins by breaking down data by client and program characteristics that could influence outcomes (e.g., program location, sex, and racial or ethnic group). With these types of breakdowns, you can do comparative data analyses such as:

  • Actual results vs. targets included in the result statements
  • Different strategies used to achieve results
  • Different service providers
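
The first comparison - actual results against the targets stated in result statements - can be sketched as a simple gap calculation (measure names and numbers below are purely illustrative):

```python
# Hypothetical targets (from result statements) and actual results.
targets = {"clients completing education program": 500,
           "% increase in client knowledge": 25.0}
actuals = {"clients completing education program": 430,
           "% increase in client knowledge": 28.0}

def performance_gaps(targets, actuals):
    """Actual minus target for each measure; a negative gap flags a
    shortfall whose causes ("why") need analysis before acting."""
    return {name: actuals[name] - targets[name] for name in targets}
```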

An example of an analytical approach is the targeting index methodology, which has been specifically developed for the LIHEAP program. See "Performance Measurement of LIHEAP Targeting" in the LIHEAP Home Energy Notebook for Fiscal Year 1996 that is available from the U.S. Office of Community Services' Division of Energy Assistance.
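One common formulation of a recipiency targeting index compares the share of vulnerable households among recipients with their share among all eligible households. The sketch below is our paraphrase of that idea; consult the Notebook cited above for the authoritative methodology:

```python
def targeting_index(pct_recipients_vulnerable, pct_eligible_vulnerable):
    """Index scaled so that 100 means vulnerable households are reached
    in proportion to their presence among eligible households; values
    above 100 indicate targeting toward vulnerable households."""
    return 100.0 * pct_recipients_vulnerable / pct_eligible_vulnerable

# e.g., if 45% of recipient households are vulnerable but only 37.5% of
# eligible households are, the index is 120 - evidence of targeting.
```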

Data analysis also examines external factors beyond the control of the program that may influence the level of performance. For example, LIHEAP program participation can be affected by the severity of the weather, fluctuations in home energy fuel costs, the economy, and the impact of utility restructuring on low-income households.

The bottom-line for this step is the need to focus attention on determining the probable reasons why certain results occurred. You need this type of understanding to make appropriate changes in operations and to explain the reasons for particular results to funders and the public.

Step 4 - Using Findings to Improve Program Performance

This is the payoff for the process. Having determined what happened and why, you now need to decide what actions - changes in procedures, changes in strategies, new collaborations, etc. - to initiate to make a good result even better or reverse poor performance.

You use findings to both improve and promote the program. You should use information on how well the program is doing and why to:

  • Identify program improvement needs and effective strategies,
  • Identify unrealistic result statements that need to be revised,
  • Support follow-on planning,
  • Provide direction to staff,
  • Identify training needs, and
  • Guide budget formulation and resource allocation.

You can also use the findings externally to:

  • Promote the program to potential clients and referral sources,
  • Identify partners for collaboration,
  • Retain/increase funding,
  • Recruit staff and volunteers, and
  • Enhance the program's public image.

Step 5 - Constantly Improving the Process Itself

The last step. Programs should never be satisfied with their managing-for-results process.

In initiating a process, rather than spending an inordinate amount of time up-front trying to develop the perfect process, you will probably benefit more from using and fine-tuning an imperfect process (Just Do It). Remember, there is no right way, and while we recommend that each process address the key steps outlined in this primer, the process you eventually develop will take on features that people using it find most beneficial. Make necessary adjustments as you gain experience so your results and measures are accurate, reliable and valid.

In addition, once you are up and running, you should constantly monitor and review all steps and elements of even a mature process; frequently asking such questions as:

  • Does each result statement clearly describe what we are trying to achieve? Does it still represent a program priority?
  • Is each measure providing useful information? Do any measures duplicate one another? Are there measures we need to revise or eliminate completely?
  • Are we collecting needed data as efficiently and promptly as possible?
  • Are we developing information that is useful for management, the operations staff, and the public?

CONCLUSION

The managing-for-results process that we have briefly described is in many ways fairly simple. However, you will soon find out that progressing through the various steps requires a lot of hard work, discipline, cooperation and a shared commitment to full implementation of the process. The rewards, though, are well worth the effort. We guarantee that a successfully operated managing-for-results process will produce improved benefits for the clients we are here to serve and provide a sense of satisfaction in a job well done.

CHARGE!



BIBLIOGRAPHY

SOURCE DOCUMENTS

Measuring Program Outcomes published by the United Way of America, 1996. Available from Sales Service/America: (800) 772-0008, Item number 0989. Cost - $5 per copy plus shipping and handling.

Evaluating Programs published by the Wisconsin Division of Housing, 1998. Available on Wisconsin's LIHEAP web site.

Best Practices in Performance Measurement, 1997. A benchmarking study conducted by the Office of the Vice President's National Performance Review.

OTHER SELECTED PUBLICATIONS AND WEB SITES

Reaching Public Goals: Managing Government for Results - Resource Guide, October 1996. Published by the National Performance Review. Lists numerous sources and organizations involved in and with knowledge of managing for results.

U.S. Administration for Children and Families' LIHEAP web site. This site includes a section on performance measurement that contains information on the Government Performance and Results Act (GPRA), LIHEAP Model Performance Goals and Measures, the LIHEAP GPRA Plan, and activities of the LIHEAP Advisory Committee on Managing for Results.

U.S. General Accounting Office. The first copy of each report is free; additional copies are $2.00 each. Reports are available at the GAO web site, www.gao.gov.

Managing for Results: Critical Actions for Measuring Performance. Washington, DC: GAO/T-GGD-AIMD-95-187. June 20, 1995.

Executive Guide: Effectively Implementing the Government Performance and Results Act. Washington, DC: GAO/GGD-96-118. June 1996.

Managing for Results: An Agenda to Improve the Usefulness of Agencies' Annual Performance Plans. Washington DC: GAO/GGD/AIMD-98-228. September 1998.

Managing for Results: Agencies' Annual Performance Plans Can Help Address Strategic Planning Challenges. Washington DC: GAO/GGD-98-44. January 1998.