
A Primer on EM&V, Data Collection, Tracking and Reporting of Efficiency Savings, and Supporting Available Tools for EECBG and SEP Grantees (Text Version)

Ed Londergan: Hello, everyone. Thank you for joining us this afternoon. My name is Ed Londergan; I'm the technical assistance network coordinator at Northeast Energy Efficiency Partnerships, which is based in Massachusetts.

With me today is Julie Michals, director of Northeast Energy Efficiency Partnerships' Evaluation, Measurement and Verification forum.

Also with us is Mark Stetz, a consulting engineer specializing in energy efficiency and measurement and verification. Mark conducts energy audits, has contributed to the Federal Energy Management Program and the International Performance Measurement and Verification Protocol, and offers training at the American Society of Heating, Refrigerating and Air-Conditioning Engineers, the United States Green Building Council, and other conferences and venues.

Also with us is Phillip Sieper, an associate with the Cadmus Group, who is currently handling day-to-day management of the New York State Energy Research and Development Authority's American Recovery and Reinvestment Act evaluation activities. He conducts quantitative and qualitative data analysis for a broad range of projects, including program evaluations and market characterization studies.

Everybody on today's call, except for the presenters, will be on mute for the duration of the webinar. If you look at the bottom of your "GoToMeeting" dialogue box, you should see an area where you can type in questions, which will also let us know if you're having technical difficulties. We'll hold questions until the end. When you exit the webinar today, you'll receive a survey, which we strongly encourage you to fill out. Your feedback is extremely important and helps us plan for future sessions.

But before we jump into today's presentation, I'd like to take a few minutes to describe the DOE Technical Assistance Program a little further. TAP is managed by a team in DOE's Weatherization and Intergovernmental Program, Office of Energy Efficiency and Renewable Energy. The TAP program provides state, local, and tribal officials the tools and resources needed to implement successful and sustainable clean energy programs. This effort is aimed at accelerating the implementation of Recovery Act projects and programs, improving their performance, increasing the return on and the sustainability of Recovery Act investments, and building lasting clean energy capacity at the state, local, and tribal level. From one-on-one assistance, to an extensive online library, to facilitation of peer-to-peer exchange of best practices and lessons learned, the Technical Assistance Program offers a wide range of resources to serve the needs of state, local, and tribal officials and their staffs.

Now, the Technical Assistance providers can provide short-term, unbiased expertise in a number of areas, including energy efficiency and renewable energy technologies, program design and implementation, financing, performance contracting, and state and local capacity building. And in addition to the one-on-one assistance, we're available to work with grantees at no cost to facilitate peer-to-peer matching, workshops, and training.

We also encourage you to utilize the TAP blog, a platform that allows cities, states, counties, and tribes to connect with technical and program experts and share best practices. The blog is frequently updated with energy efficiency and renewable energy related posts, and we encourage you to use it to ask questions of our topical experts, to share your success stories, best practices, and lessons learned, or to interact with your peers.

You can also request direct assistance by going online to the Solution Center, or calling the Call Center, and once a request has been submitted it will be evaluated to determine the type and level of assistance that TAP will provide.

Now, Northeast Energy Efficiency Partnerships is part of one of the five teams that make up the Technical Assistance Network. Team 4 is comprised of nine energy efficiency organizations from around the country that provide expertise and assistance in program planning, implementation, and design. We work closely with each other to fulfill the needs of all the grantees, regardless of where in the country they're located.

And to give you two quick examples of what we've done recently, a grantee was considering having energy audits done for three older municipal buildings and needed more information on how to go about this and what to look for when developing and reviewing the RFPs, so we developed a guide to municipal energy audits which they found very helpful.

In another situation, a grantee, a large city in the Northeast, was undertaking a citywide energy efficiency effort and was in the process of putting together a guide for employees, so we developed a municipal energy efficiency and conservation policy document for the employees, which they too found very helpful.

And that's the Technical Assistance Network, and I'm going to turn it over to Julie Michals. Julie?

Julie Michals: Thank you, Ed. Welcome, everyone, to this webinar. We have a number of things to cover for you today, and here's an overview of what we'll be talking about. We're focusing on evaluation, measurement, and verification of energy savings from the energy efficiency projects that you're undertaking as part of the EECBG and SEP ARRA-funded grants.

We're going to be reviewing what EM&V is and why it's important. We'll be reminding everyone of what DOE guidance there is on EM&V to date. Then we'll be providing some high-level information about available energy savings estimates, data, and the tools and calculators available to support your efforts. We'll provide a high-level overview of M&V methods and approaches, including data collection, and we'll close with an overview of tracking and reporting of energy savings.

We want to make clear to everyone that this is intended to be a primer on EM&V. We recognize that there may be a whole range of audiences on this webinar, from large SEP grantees and their evaluators, to much smaller municipal or community block grant recipients who have little in the way of EM&V knowledge or resources.

So it was a bit challenging for us to pull together this broad range of resources and information for such an audience, but we hope this will be informative and will lead you to some good tools and resources. And importantly, at the end of our session here today, we're going to be conducting a poll to get your feedback on what additional EM&V information would be helpful to you in upcoming additional webinars or materials that we develop for assistance.

So with that, I'm going to give an overview of what evaluation, measurement, and verification is. These are some basic definitions that break this into two pieces. M&V, commonly the acronym for measurement and verification, involves the collection of data (pre- and post-installation of an efficiency measure, project, or facility), and it supports the energy savings calculations using a range of activities. We'll get a little bit more into that in this webinar.

Evaluation is the measurement and analysis of the performance of a set of programs or a collection of projects, in which M&V may be a subset that involves taking a statistically significant sample of individual projects as part of that broader evaluation effort. This then forms the basis for calculating total program or project savings. Typically I think of M&V as being part of a broader evaluation effort.

Some important terms to be aware of here. EM&V is used to determine primarily gross savings, which is basically the change in energy consumption and/or demand that results from program or project-related actions taken by participants, regardless of why they participated.

Net savings involves looking at the change in energy consumption and/or demand that is attributable to the specific program or project. In this case, let's say the grant that you received: did the savings happen because of that grant, and would they not have happened without the grant?

DOE cares about net savings. Sometimes developing that number can be a challenge; it depends on what your circumstances are. But these are the two reference definitions of savings that you'll see in some of DOE's guidance.

Why is EM&V important? It's important for a whole host of reasons. It supports the credibility of efficiency and the basis for reported savings, as well as the number of jobs created. It helps you track the cost-effectiveness of efficiency projects and determine how much money you'll save in terms of lower electricity bills. It helps you plan and inform future project investments. And importantly, it helps quantify environmental impacts, including reducing air pollution and helping meet city, community, and statewide climate change goals.

We make a note here that, with everyone doing so much efficiency as part of the ARRA work, and as many of you may be aware, many states have ratepayer-funded efficiency programs, some of which are being coordinated with SEP and EECBG efforts. It's important to have consistency in EM&V approaches and methods so that, to the extent we can, we're comparing apples to apples and aggregating apples with apples. We're going to be leading you to some resources and guidance that can help support as much consistency as possible.

Here is your basic equation for energy savings, which is taking your baseline or pre-installation energy use level, sometimes referred to as the base year, subtracting the energy use in the post-retrofit period, and making adjustments. Those adjustments can be a number of things, and we'll be speaking a little bit to that issue. In particular, there are some tools out there that address weather adjustments, and evaluation often addresses some of these other things as well. Measurement and verification looks at changes in operating hours and those kinds of things.
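To make that equation concrete, here is a minimal sketch in Python. The numbers and the simple weather adjustment are illustrative assumptions only; they do not come from DOE guidance or any particular tool.

```python
# A minimal sketch of the basic savings equation:
# savings = baseline (base-year) use - post-retrofit use +/- adjustments.
# All values below are illustrative assumptions.

def energy_savings(baseline_kwh, post_retrofit_kwh, adjustments_kwh=0.0):
    """Return savings as baseline minus post-retrofit use, plus adjustments."""
    return baseline_kwh - post_retrofit_kwh + adjustments_kwh

baseline_kwh = 120_000        # base-year consumption
post_kwh = 100_000            # post-retrofit consumption
weather_adjustment = 2_500    # e.g., credit for a colder post-retrofit winter

print(energy_savings(baseline_kwh, post_kwh, weather_adjustment))  # 22500 kWh
```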

So as you move forward with your projects, and I believe many of you are just receiving money and starting to plan and implement these, it's important to be simultaneously thinking about how you're going to conduct your evaluation, measurement, and verification. And that involves determining how you're going to determine the savings, what approach method will you use? As well as what data you're going to collect to support those savings and to evaluate and analyze your projects.

And in reality, these aren't steps that go particularly in the order of one and two, it may be that what data you can collect and afford to collect will determine what approach you use, and vice versa. So the reality is that each recipient will have to determine what your resources are, and based on that, what approach and data you can collect to get the best data from your project.

And finally, considering how you're going to track and report savings to DOE, and there's some information and tools on that.

Okay, so a couple more slides here in terms of DOE guidance. First of all, it should be made clear that DOE has indicated that EM&V is not required as part of grantees' award agreements. However, it is encouraged for the reasons we've just discussed. DOE has developed a basic calculator tool, referred to as the Recovery Act Benefits Calculator, into which you provide some basic information to develop initial estimates of savings. This particular tool is not intended to replace EM&V, but it is available as a planning tool or for use in the absence of any EM&V resources at all.

Another useful, available, no-cost tool for recipients is the Energy Star Portfolio Manager tool. Some of you may be familiar with this. It measures and tracks pre- and post-installation energy performance at a whole-building level, and it includes automatic weather-normalization adjustments based on inputs such as the zip code. Later on we have a bit more information on this, including a link to that tool. And I believe that there have been some webinars to date on this Portfolio Manager, I think as recently as last week or the week before.

And then finally, DOE encourages grantees that have the resources to conduct more sophisticated EM&V to conduct studies in accordance with the guidance it has issued, which is included here in this link, and we'll be reviewing that guidance momentarily.

Just a few things on the Benefits Calculator. Here are the specific guidelines where DOE indicates its intent for the use of this tool. It provides high-level estimates of savings and emissions, but it is not intended to replace contractor or engineering estimates. The basic data entered would be things like number of units installed, horsepower for motors, square footage, and so forth. The assumptions and formulas are built into the calculator. It does include residential retrofits and commercial, non-residential retrofits. And again, as I noted, it's useful for initial savings estimates where you don't have any other sources or data. And there's the link to that calculator if you're not already familiar with it.

And then here's some basic information for you on the Energy Star Portfolio Manager: a link to where you can find this tool, basic information on the type of building data that's needed, as well as a reference to the measurement and verification approach IPMVP Option C, which is discussed later. It's a mouthful; you don't need to know exactly what that is right now.

I mentioned the earlier webinars, and then there's a link as well at the bottom here for training on this particular tool. And I believe that DOE may be providing some further guidance on use of Portfolio Manager for EECBG and SEP recipients, and that may be forthcoming, I'm not sure when.

Okay, so I'm going to turn it over now to Phil Sieper from the Cadmus Group. Phil is doing work as an evaluation contractor for the New York State Energy Research and Development Authority, a very large ARRA recipient, and he's going to speak to the guidelines that DOE has on conducting third-party evaluations. Phil, I'm going to hand it over to you.

Phillip Sieper: Thank you, Julie. As was mentioned earlier, the Technical Assistance center can help with a lot of questions, not only on EM&V, but also in areas such as finance, program implementation, workforce development, and others as well. And as Julie mentioned, I'll be speaking primarily to the DOE guidance to date.

Julie did already mention that evaluations are not required by DOE, but they are encouraged. They're an important way of better understanding the impact of your program. If you don't already have a third-party evaluator and are interested in engaging one, you'll most likely need to talk to your contract officer in doing so. And if you're going to go in the direction of using a third-party evaluator, it's highly recommended that you follow the guidelines that DOE specifies.

I'm not really going to have a lot of time to go into depth on these. There is a webinar that was done by Nick Hall, Faith Lambert, and Marty Schweizer back in June; the link is provided in the slide. The webinar audio is also available at the TecMarket Works website, which you can go to for a more in-depth view of the guidelines.

So that said, the guidelines are high level in nature. They're in line with the standards that evaluators are familiar with, the American Evaluation Association standards and the California Protocols, among others. They're primarily focused on four metrics for the DOE evaluation. Those are energy and demand savings, looking at both site and source energy savings; for renewable energy, the capacity installed as well as the generation achieved; carbon emissions, in metric tons; and job creation, looking at the number of jobs, type of jobs, and durations - one week, five years, a permanent job.

That information would then be fed into the one cost-effectiveness test that DOE is requiring for SEP, which is the SEP rec test. Again, more information on that is provided through the other webinar that I mentioned earlier.

In looking at an independent evaluator, you want to be assured that it's an independent third party that has no financial or management interest in the initiative being evaluated, and that it will not benefit or appear to benefit from the study's findings - something similar to an audit. You also should clarify that there is no conflict of interest, whether real or perceived. And the evaluator should have expertise in the area that's being evaluated, and the work should be led by skilled, experienced, and trained evaluation professionals.

Julie touched on attribution effects a little bit in her slides. DOE is looking for the net effects of these programs - basically, what was achieved above and beyond what would have been achieved without the ARRA-funded dollars in the programs that are in place. The evaluation should look at the results and effects of those ARRA funds, and not include the effects of other non-funded initiatives or activities.

If multiple programs are involved, you would want to split out the funded initiatives, most likely by the proportion of funding allocated. There are also more complex methods which I'm not going to have time to get into, but again, Nick was able to cover those in his other webinar.

Regarding the evaluation budget: typically, budgets range from 2 to 5 percent of the overall program budget for evaluation. DOE's guideline is to set aside 5 percent or less of your funding for evaluation work.

And the evaluation should be started at the time of the project activity, really as early as possible. This allows for a better baseline evaluation approach; data collection efforts can be established early in the planning process; and, if possible, early feedback and course corrections can be made without influencing or creating bias in the programs, which is often helpful for the program implementers.

What's recommended is a state-of-the-art analysis. The evaluation should use current, state-of-the-art evaluation approaches and analysis methods. It should maximize the use of technical evaluation advancements and the most current analytic approaches.

The approach should be as rigorous as possible, and the results should be as reliable as possible within the study approach and budget limits. These obviously vary, and in a later slide I'll be talking about and showing you some sample protocols, with links to how to get there.

The study design should be independent of the project administrators and implementers. The independent evaluator should work with administrators and implementers to understand the operational processes and establish reliable and cost-conscious approaches.

They should take into account any threats to validity. The independent evaluator should address and report any threats to the validity of the study design and analysis approach. Examples of this would be self-selection bias in surveys or false-response bias.

It should also consider alternative hypotheses: what would have occurred if the program didn't exist, or in the absence of the program, and rule out any alternative explanations for how the observed effects came about.

The independent evaluator will also want to make sure that the study is replicable, using a sound methodological approach and providing enough detail in the study that it can be replicated by others.

The study plan should have many things specified, including the study methods and approaches, the tasks to be conducted, a detailed approach to the data collection methods, and a detailed analysis approach for each of the key metrics mentioned earlier: the energy and demand savings, the renewable energy generated, the carbon emission reductions, as well as the jobs created.

In the approach to sampling and statistical significance, a simple random sample, a stratified random sample, or probability proportional to size are the sampling approaches that could be used, and the sampling should be no less rigorous than a 90 percent confidence level with a precision of plus or minus 10 percent.
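As an illustration of what that 90/10 criterion implies for sample sizes, here is a minimal sketch using a standard planning formula. The assumed coefficient of variation (0.5) and the example population size are hypothetical planning values, not DOE requirements.

```python
import math

def sample_size_90_10(population, cv=0.5, z=1.645, precision=0.10):
    """Planning sample size for 90% confidence / +/-10% relative precision.

    cv is an assumed coefficient of variation of savings; z = 1.645 is the
    90% confidence z-value. A finite population correction is applied.
    """
    n0 = (z * cv / precision) ** 2                # infinite-population size
    return math.ceil(n0 / (1 + n0 / population))  # finite population correction

print(sample_size_90_10(population=400))          # about 58 projects
```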

One of the possible approaches is IPMVP which will be discussed a little bit later by Mark and the later slide will also provide a link to that.

So this next slide speaks to different technical manuals that are available. These allow the program evaluator to use appropriate numbers or values for deemed savings particular to your area. If you're not familiar with the DEER database, it's probably the best one to go to and has the largest number of measures available to review and look at, but you do want to make sure that you're looking at weather-normalized numbers appropriate for your particular area and your particular program.

This is a sample calculation that you can look at and use. It's a little bit detailed for our discussion right now, but the main takeaways here really are to look at the calculation in the sense of what would have happened in the absence of the program, and then what actually occurred because of the program, that difference being the impact of your program. This is your basic calculation, with definitions of the components of that calculation.
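Since the slide's worked numbers aren't reproduced in this text version, here is a generic, hypothetical sketch of the idea: start from the gross change in use and net out what would have happened anyway. The free-ridership and spillover values are invented purely for illustration.

```python
# Hypothetical gross-to-net adjustment; all values are illustrative.
gross_savings_kwh = 500_000   # change in use by participants, regardless of why
free_ridership = 0.20         # share of savings that would have occurred anyway
spillover = 0.05              # additional savings induced but not claimed

net_to_gross = 1.0 - free_ridership + spillover
net_savings_kwh = gross_savings_kwh * net_to_gross
print(net_savings_kwh)        # ~425,000 kWh attributable to the program
```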

And the last slide that I would like to cover lists some different guidelines that are good to use and good to look at if you're not familiar with evaluation work. The first one mentioned here, the DOE/EPA model, is a good place to start, and then the list gets more complex as you go down, depending on how familiar you are with evaluation work.

The last one there, the IPMVP, which will be discussed a little bit more in depth, is the most widely used and actually has been translated into many languages.

With that, I'm going to pass the presentation on to Mark, who will speak a little bit more about the measurement and verification.

Mark Stetz: All right, thank you very much for that. I'm assuming you can all hear me just fine. The two key components to M&V are to verify the potential to generate savings - meaning, does the retrofit that was installed actually provide the savings as expected and as claimed - and then to actually quantify and determine those savings, putting them in kilowatt-hours, kilowatts, and in some cases, dollar terms.

A simple example would be a lighting upgrade, where we can very easily quantify the fixture power both before and after, and that gives us the change in the expected rate of energy use. But the real question is, what are the real energy savings? That's determined by the number of fixtures and the number of hours they're operating. A fixture count is fairly easy to determine, but operating hours really determine what the overall energy savings are, and that's often a harder number to quantify. And this is why we need to take measurements rather than just make assumptions about what our operating hours are.
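Here is a minimal sketch of that lighting arithmetic, with made-up fixture counts, wattages, and hours. The point is that the operating-hours term, ideally taken from loggers rather than assumptions, drives the annual savings.

```python
# Illustrative lighting retrofit savings; all inputs are assumed values.
fixtures = 200
baseline_watts_per_fixture = 112     # measured pre-retrofit fixture wattage
retrofit_watts_per_fixture = 59      # measured post-retrofit fixture wattage
operating_hours_per_year = 3_120     # ideally from run-time loggers, not assumptions

delta_kw = fixtures * (baseline_watts_per_fixture - retrofit_watts_per_fixture) / 1000.0
annual_kwh_savings = delta_kw * operating_hours_per_year
print(delta_kw, annual_kwh_savings)  # ~10.6 kW demand reduction, ~33,072 kWh/year
```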

So the questions for any project are: were the baseline conditions accurately defined? What we need to do is define a baseline against which we're going to assess savings, because whenever we report savings, they're always relative to an original condition. And secondarily, we also need to address: were the proper equipment and systems installed? In addition to that, not only were they installed, but are they operating correctly, were they commissioned properly, and are they doing what they're supposed to do? Because it's very easy to install hardware, but it's also possible to do it in a way that doesn't generate any energy savings. And unfortunately that happens more often than we sometimes care to admit.

And this is what I mean by, are the systems performing to specification? In a qualitative sense, do the systems have the potential to generate the predicted or estimated savings? Are the compact fluorescent lights installed and do they remain? And then secondarily, is there continuing potential for savings? A number of factors can address this. What is the long-term potential for savings? How long will these measures remain economically viable? It comes back to equipment lifetimes and also to operations and maintenance practices. In many cases you have to replace like with like. So if you put in T8 lamps, you have to replace them with T8 lamps if you're going to continue to generate savings for more than just a couple of years. So those are things that we need to address long term.

We assess these things both in the baseline and in the post-installation case by using inspections - just visual inspections, going out in the field and doing fixture counts and making sure that things are installed and operating as expected. We also try to do spot measurement tests at a minimum, taking simple 15-second or 15-minute measurements to ensure that what has been installed is working as expected.

Commissioning is a related activity, and there's a lot of overlap between measurement and verification and commissioning, because both activities rely on visual inspections, observations of behavior, and measurements. The goals of the two activities are slightly different, but a lot of the commissioning activities that take place feed back into our measurement and verification activities. So in that sense we derive added value from commissioning, because that information feeds back into our M&V work.

Really, with measurement and verification what we're trying to do is figure out what would have happened, which is usually some kind of mathematical model or formulation based on what we think would have occurred in the absence of our measure. In one sense, it's easy to think of measurement and verification as saying, "We'll look at this November's electricity bill and last November's electricity bill, and we'll take the difference." The problem with that approach is that it ignores a lot of the factors that drive energy use and may change but have absolutely no relationship to the project that we have installed. Other factors on a building, such as weather or occupancy, also drive energy use independent of anything that we may have upgraded.

So our "What Would Have Happened Meter", as we have postulated here, is really a mathematical formulation that takes all of the relevant factors into account and establishes a baseline from which we can assess energy savings. So in this particular example, that's what the red line is showing. You can see that it's not just mapping previous years onto this year's performance period. It's trying to understand in a mathematical sense what would have occurred here.

So that hypothetical meter measures - or more correctly, defines - the base-year energy use, or the baseline situation, against which we assess savings. Some of the different methods available to us are engineering calculations, and that goes back to some of the DEER estimates that were mentioned earlier with respect to deemed savings; that approach relies very heavily on manufacturers' data and the assumptions we make about typical operating behavior and typical operating practices.

We have spoken about the IPMVP, and that defines a number of options. But another thing we can look at is what is called end-use metering. With end-use metering you look at the actual equipment, or a sample of the equipment that has been installed, and make estimates based on that information. The nice thing about end-use metering is that it tends to ignore things outside the boundary of a retrofit and derives energy savings only from what we have affected. And in that sense it's relatively easy to isolate what we have done and ignore everything else, and that gives us a rather accurate picture of how well our systems are working.

On the other hand, we can be a little bit more complicated and look at the whole building and the utility meters. This is what most people who are new to this field think of when they think of measurement and verification. When we do an energy efficiency project you expect your energy use to decrease, so you would expect a reduction in your utility bills. That's exactly what Option C does: it looks at your facility, but it also makes adjustments based on weather, operating characteristics of the building, and so on, trying to formulate that "what would have happened" baseline scenario. From that it postulates what the difference is, and from that we can claim our savings. It's a little bit more complicated than just looking at last year's and this year's bills.

And then lastly we have calibrated computer simulation. In this sense, what we do is we actually simulate the performance of the building using simulation software like eQUEST or DOE-2, and make assumptions about how the building would have operated under specific operating conditions. And there are reasons we may want to do this, one of which might be if we're building a new building and we have no historical baseline against which to claim savings. So we can create our building in software rather than build a duplicate building and use that as our baseline. So it provides a very flexible means of assessing savings.

All of these are defined in the International Performance Measurement and Verification Protocol, and the latest version is 2010. What it does is provide a framework for definitions and methods of the four options that I've mentioned above. But it doesn't provide guidance on evaluation activities. So the IPMVP really addresses facilities and buildings at the project level, but it does not tell you how to evaluate a number of projects that have occurred over a large number of buildings. All the information that we develop at the building level and the facility level feeds into program evaluation, but the IPMVP doesn't really address that.

So regardless of what approach we take, the steps are similar. One, gather the baseline data, and this includes our energy use, our demand, and our operating conditions. How does the building operate, how does the facility or system operate, and all of the information that we would need to project what the baseline energy use would have been in the future once we've made a change to the facility. Sometimes, if we don't gather that data rigorously enough, we run into problems trying to estimate how the project is working in the future because we don't really have an accurate picture of the past.

When we have that and a number of other factors, we can then develop a project-specific M&V plan: for any given facility, how are we going to assess savings? Even with the four options I've outlined, there are a large number of variations in our approach. So we really need to develop a project-specific M&V plan to address each facility, and this is where some of our time and effort gets spent.

With that we can then verify, once the system has been installed, that the proper equipment and systems were installed and are performing to specification. This is where some of the commissioning activities come into play, but really what we're trying to determine is whether these systems have the potential to perform. Because if you think about it, we can put this equipment in and in the first week or so we can say, "Okay, this has the potential to perform," but because a year has not elapsed we may not be able to reliably estimate how these systems will perform in the future. A lot of that comes back to how the systems are actually used in reality, not how we think they're going to be used. But what we want to do is verify that the systems are working and have the potential to perform, or the potential to generate savings.

And then lastly after some period of time has elapsed, we actually need to gather post-retrofit measured data and compute energy and demand savings and cost avoidance as defined in the M&V plan. In other words, once we've written a plan, we now have to implement that plan to calculate, to measure the performance of the system, and then compute all of these factors as defined in the plan.

Here I'm not going to read you all the points in the table, but these are five different approaches we can use, four of which are defined by the IPMVP. Let me just call your attention to the first line item, engineering calculations. The advantage of using engineering calculations and deemed savings is that they're low-cost and relatively easy. And the disadvantage, as I've pointed out, is that they don't really have a lot of rigor at the project level. So in the programmatic sense we can apply engineering calculations, but that doesn't mean that those results apply at the individual level.

An example of that might be a residential lighting retrofit program where every household is different. So if we take a deemed value, it doesn't mean that each individual household is going to save the same amount of energy on a per light bulb basis. But over thousands of households and thousands of lighting upgrades, this will get us to an answer relatively easily without sending an engineer out into every household to make a lot of measurements.
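A minimal sketch of that programmatic use of deemed savings follows, with hypothetical per-bulb values and participation counts; these are illustrative assumptions, not DEER figures.

```python
# Illustrative deemed-savings roll-up for a residential lighting program.
deemed_kwh_per_bulb = 30.0       # assumed deemed annual savings per CFL
households = 12_000
bulbs_per_household = 4
in_service_rate = 0.80           # assumed share of distributed bulbs actually installed

program_kwh = deemed_kwh_per_bulb * households * bulbs_per_household * in_service_rate
print(program_kwh)               # ~1,152,000 kWh/year across the program
```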

In one sense, with respect to the IPMVP, this does not constitute measurement verification because no measurement or verification has occurred at the individual project level.

With respect to the others, we have end use metering, two different flavors of that, which are also relatively easy to implement, but a little bit more difficult, for example, than the deemed savings calculations.

Whole-building level, Option C: it's easy to understand the results, and conceptually this is easy to understand. What we're doing is looking at utility bills and deriving what the savings are after making adjustments for changes to the facility operation. The problem with this approach is that it's influenced by a number of external factors that we have to keep track of. So it requires a stable facility, and it takes time to start deriving results from this.

Option C is suitable when savings exceed 10 percent of the facility's baseline energy use. This is what Portfolio Manager is doing: it's taking a rather simple approach to an Option C methodology, because it does look at some of the external factors. So it's not a particularly rigorous approach, but it is one way to implement Option C.

So I will turn it over to our next speaker.

Julie Michals: Thank you, Mark. We just have one slide here. It ties to some of the things Mark was talking about in terms of M&V approaches and related data collection. We noted earlier that data collection really can depend on what approach you're using and what resources you have. We've noted here that basic data collection is provided in, for example, the Portfolio Manager tool, which has examples of data needed to support measurement and verification at the whole-building level, by building type. They have a number of building types within that tool, and here is listed the type of information that would be needed.

There are much more detailed data collection requirements and needs, depending on where you look. Better Buildings is doing some work on this. There is also a reference here to the California Evaluation Protocols, and I believe there's a Home Performance XML data collection database. There are a number that aren't listed here, and we're not going to get into detail in this webinar; again, it depends on what approach you're using. But at this point, this information should be helpful in giving you kind of the spectrum.

And again, for the SEP grantees, I believe in the webinar that was provided earlier this summer on third party evaluations, I think there is a reference made to the California Evaluation Protocols to this particular page where there's a list of detailed data to be collected.

And then very briefly on tracking tools: we have heard somewhat anecdotally that recipients are interested in learning about what available tracking tools there may be to support EM&V activities and data collection in general, the basic function of tracking being to record collected data. And again, like anything else, these systems can vary from very comprehensive to simple spreadsheets. We did not, for this webinar, go into any detail on examples of what tracking systems are out there. We do note Portfolio Manager as one example, and we invite recipients to inform us whether gathering more information, perhaps in a dedicated webinar on this topic in the near future, would be helpful.

So again, important to know that having the ability to track data, whether it's a simple spreadsheet or something much more sophisticated, is an important part of managing your EM&V activities, and likely we can provide more information in that regard. And in some cases, those can feed into reporting templates.

This is some specific information on the existing guidance from DOE on reporting requirements. I would imagine many of you, if not most of you, are familiar with what's involved here in terms of energy savings, costs, and, beyond efficiency, renewable energy capacity and generation, associated emission reductions, and then specific process metrics. And then we have links here to where you can find those reporting guidelines. I believe they're pretty consistent for both the EECBG and SEP grantees.

So with that, we're going to leave time here certainly for some questions. I'll note we're at 2:45, so we have 15 minutes left. I think we're going to spend about 10 minutes on questions, and then, if those of you on the phone could kindly hang in for the last five minutes of this webinar, we will be conducting a poll. It will present on your screen possible next webinar topics for you to rank, building off some of the discussion today, so we know what you see as important and valuable and can be as informed as possible to bring you the best information that you need at this time.

So with that, I'm going to introduce Elizabeth Titus, my colleague here at Northeast Energy Efficiency Partnerships, and she's been recording all the questions that have come in. Thank you for your questions. I'm going to have her read them out to everyone, and we will open the phone up to our speakers as well. We have Nicole on the phone to answer any SEP-related questions and Keith Dennis from U.S. DOE to answer any questions that may have to do with EECBG activity.

Elizabeth Titus: Good afternoon everybody, let me first start with two sort of process questions which are: Are copies of the slide show available and will they be posted on line?

Ed Londergan: The presentation will be posted on the DOE Solution Center within three to five days.

Elizabeth Titus: And then there were really three different questions that have come up so far, and the first one was: Are there some historical EM&V studies that were done by research facilities such as National Labs or universities, controlled settings, that might provide useful data? And this question was asked around the time that the presentation was looking at the meter for "what would have happened". Would any of the speakers like to address this?

Mark Stetz: This is Mark, and I'll try and do my best on that one. There have been a number of studies. One was done at the Lawrence Berkeley National Laboratory, but it addresses more some of the loss mechanisms, what are the primary energy efficiency opportunities. It gives you more of a probability of finding particular problems at a facility. With respect to finding out actual energy consumption, that one's a little bit harder. If you can send me that question via email, I can see if I can attempt to come up with some better answers.

Elizabeth Titus: Thank you. Any of the other speakers want to address it? Nicole, are you on the line? I want to also mention that I'm aware that many evaluation contractors have collections of load shapes for different building types that they put together, and that may be one source of information. And also the Department of Energy has a commercial building energy consumption survey, which is another source of very general information characterizing commercial building energy consumption. So those are two kinds of data and kinds of starting points that are available, not necessarily from universities or research facilities, but they may be helpful.

Phil Sieper: This is Phil Sieper. Also on slide 20, the California Evaluation Protocols, that link there to Calmac.org, I believe they may actually have some of that information as well on that website from some evaluations that have been performed in California.

Elizabeth Titus: Moving on, the next question was: How does one verify behavioral measures?

Mark Stetz: This is Mark again, and unfortunately, those tend to be the hardest to evaluate because the problem is, you can't put data loggers on people for any extended periods of time. Two possibilities are to evaluate, for example, lighting operating hours and see if they have been reduced as part of an educational campaign to encourage people to turn off lights when they leave the room. And the other is to actually look at the overall energy use of the facility.

The second one tends to be a little more difficult, simply because in behavioral changes, the overall energy savings tend to be very small on the order of one to two percent, and that's very difficult to tease out in a statistical sense from the utility bill data. But the other approach might be to put in lighting operating data loggers or some other system that tracks lighting run time and see if there's any change. But it is a difficult problem and sometimes measures of that nature end up relying on deemed values from studies that other people have done.
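Where logger data are available, a minimal sketch of converting a change in lighting run-hours into energy savings might look like the following; the connected load and hours are illustrative assumptions, not measured values from any program.

```python
# Illustrative behavioral-savings estimate from lighting run-time loggers.
connected_lighting_kw = 45.0
pre_hours_per_week = 72.0     # logged average run-hours before the campaign
post_hours_per_week = 66.0    # logged average run-hours after the campaign

annual_kwh_savings = connected_lighting_kw * (pre_hours_per_week - post_hours_per_week) * 52
print(annual_kwh_savings)     # 14040.0 kWh/year
```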

Elizabeth Titus: Yes, and any other speakers want to weigh in on this question?

Keith Dennis: This is Keith Dennis. I agree with the earlier statement and just mention that if you are tracking using a whole building approach, you will see the results of behavior change in the actual performance of a building. But again, the attribution to the actual behavior change might be more difficult, but in general if you're performing a series of operations or upgrades in a building and also doing some educational and behavior type projects, if you look at the whole building level, that would be captured in that kind of data set.

Elizabeth Titus: Thank you. And I do want to just add that when all else fails, or to supplement these other things, it's common to do surveys of building occupants, for example, and while it's self-reported data, if you carefully craft a set of questions, you can often help document what effects of your training or your program have been.

A third related question is: How does one monitor or verify energy savings from workshops. For example, workshops where tool kits are handed out? This relates to how does one evaluate training?

Mark Stetz: This is Mark, I'll try and take this one again. If I'm understanding the question, I've heard of some of these workshops where, for example, you'll have a group of homeowners and you'll hand out energy saving kits, which may consist of a programmable thermostat, a faucet aerator, a low-flow showerhead, stickers to put on switches, and a compact fluorescent or two. Again, the problem here is that you have sort of diffused savings at the individual homeowner level that may amount to 5 or 10 percent, but over a large number of homeowners, how do you get at that? It's pretty difficult to actually conduct real measurement and verification with that approach. Essentially you have to do two things: use your deemed savings database to figure out what the individual measures might save, like your aerators and your showerheads, and then figure out what the installation rate is. It's one thing to hand out a kit, but how many of those kits are actually installed versus sitting on a shelf somewhere in the closet, never to be used? Those two pieces of information can inform savings estimates. But due to the diffused and small nature of these savings, measured evaluations are sometimes very difficult to implement.
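A minimal sketch of those two pieces, deemed per-measure savings multiplied by an installation (in-service) rate and summed over the kit and the number of kits handed out, might look like this; every value is an illustrative assumption.

```python
# Illustrative kit-savings estimate: deemed savings x installation rate per measure.
kit_measures = {
    # measure: (assumed deemed annual kWh savings, assumed installation rate)
    "programmable thermostat": (250.0, 0.40),
    "faucet aerator":          (40.0,  0.65),
    "low-flow showerhead":     (180.0, 0.55),
    "CFLs (2 bulbs)":          (60.0,  0.80),
}
kits_distributed = 1_500

per_kit_kwh = sum(kwh * rate for kwh, rate in kit_measures.values())
print(per_kit_kwh, per_kit_kwh * kits_distributed)   # ~273 kWh/kit, ~409,500 kWh program-wide
```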

Elizabeth Titus: Other speakers want to comment on this one? I want to also add that just to help characterize the baseline conditions, it may also be helpful to issue a survey right at the same time during the actual workshop so that you get a sense from the workshop participants of what kind of home or facility they're in and characterize the baseline that way. And then go with a follow up survey how they have used the toolkit or whether they have used the toolkit.

Phillip Sieper: This is Phillip, I would actually follow up on that. I completely agree, and a good place to do that as well is the registration that's occurring - doing a preregistration and having some questions there, and then exit survey questions at the completion of any kind of training or workshop. And then a follow-up with a sample of the ___ at a period later, depending on the timing that you have for getting those results - six months to a year later - to see how they've used the training and that workshop.

Elizabeth Titus: Thank you. And then finally, there are a number of places in this country where building operator and maintenance programs have been offered, and some of those have been evaluated. I can say that an example of that is, I think, available at the _______ website that Julie posted in one of the slides here. And that will give you an example of an approach for a commercial building.

Julie Michals: Thank you, Elizabeth. So we still did receive some additional questions, but we're at the last five minutes of our webinar and we would like to do this poll. I did see that a number of the questions are actually asking about additional resources or suggestions for what we might focus on, and we will get back to those folks who submitted those questions, one way or another, and incorporate them into our next steps, or certainly address them. So thank you for those questions and sorry we can't get to them all right now.

So with that, we're going to conduct this poll, and what you're going to see is Leslie, I hope, is going to start the poll for us, and we're only going to give 30 seconds to respond to each poll. There's going to be a list of six items that we would like you to rank as very important, important, not important, or don't know.

The first one is whether we should provide any guidance or template information on how to develop an evaluation request for a proposal for a third party contractor.

And, Leslie, is the clock going on that? Does everybody see that on their screen?

Leslie: Can you see it on your screen?

Julie Michals: Yup.

Leslie: Okay, you just want to give it about 30 …

Julie Michals: Let's just give it 30 seconds, and if the folks could fill out that first poll question.

Leslie: Okay, it's been about 30 seconds, so we'll pull up the next slide.

Julie Michals: Okay, the next option is an M&V plan, Measurement and Verification Plan template.

Okay, the next one is not up yet, hang on.

Leslie: Okay, the next one.

Julie Michals: The next one is, more detail on Evaluation Measurement and Verification methods and approaches. So to go more in depth than what we did today.

Leslie: Okay, I'll get you the next one.

Julie Michals: Okay, the next one is examples of tracking tools.

The next one is the idea to actually have SEP grantee and EECBG grantee case studies provided in a webinar, so you can hear directly from some recipients on how they're conducting their EM&V activities.

And finally we invite other topics. And I don't know if people can write these in directly. Perhaps is there a way that folks can send other topics of interest directly to - oh, there is room for writing. There is space for you to write there, so if you have other ideas, please do make a quick note.

So with that, we're done with the poll. We would like to thank you very much for conducting that, and Ed Londergan here is going to close out our webinar with a couple more slides.

Ed Londergan: This webinar will be placed on the DOE Solution Center site in the next three to five days for your reference. Also, as you can see, there are several other webinars coming up over the next few weeks, and we invite you to join us; they cover very good topics and all timely material.

Thank you for joining us this afternoon. We hope you found it interesting and informative.

Julie Michals: Thank you very much, everyone.