Intelligent Transportation Systems

ITS Evaluation Guidelines – ITS Evaluation Resource Guide

Introduction

The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) prescribes that the U.S. Secretary of Transportation shall issue guidelines and requirements for the reporting and evaluation of operational tests and deployment projects carried out under this Subtitle [i.e., Subtitle C, Title V]. The document fulfilling this mandate is titled SAFETEA-LU Reporting and Evaluation Guidelines. That document serves several purposes: it defines the categories of projects carried out under Subtitle C, Title V, of SAFETEA-LU; it defines provisions to ensure the objectivity and independence of reporting entities, so as to avoid any real or apparent conflict of interest or potential influence on the outcome; and it defines reporting funding levels based on the size and scope of projects to ensure adequate reporting of results. The SAFETEA-LU Reporting and Evaluation Guidelines also describe a recommended and proven process for conducting evaluations. A principal purpose of this ITS Evaluation Resource Guide is to expand and elaborate on the recommended evaluation procedures outlined in the SAFETEA-LU Reporting and Evaluation Guidelines.

This document is divided into the following sections:

  • This Introduction
  • A primer that presents the Federal philosophy on ITS evaluation, as well as a definition of "independence" as it relates to an independent evaluator
  • A detailed description of the recommended six-step process for ITS evaluation, including background information on how to collect data in support of key performance measures of ITS goals

This document also provides examples of the documents produced at each step of the six-step evaluation process: Forming an Evaluation Team, Developing an Evaluation Strategy, Developing an Evaluation Plan, Developing Detailed Test Plans, Collecting and Analyzing Data, and Preparing a Final Evaluation Report. Finally, Annex B to this document lists bibliographical references that describe certain evaluation techniques in more detail.

Definitions of Evaluation and Independence

Evaluation is the reasoned consideration of how well project goals and objectives are being achieved. The primary purpose of evaluation is to cause changes in the project so that it eventually meets or exceeds its goals and objectives. Evaluation is an essential ingredient of good project management. Evaluations can be either qualitative or quantitative. The best evaluations, however, combine qualitative and quantitative information, comparing and contrasting converging and possibly conflicting evidence. The most effective evaluations occur when goals and objectives are explicitly stated, are measurable, and are agreed to by all parties involved.

Evaluation should be considered an integral part of the project development process, and be considered in each phase: strategy formulation, detailed planning, system design, system implementation, data collection, data analysis, and reporting of results. Evaluations should be performed by an independent party who has no vested interest or stake in the project itself.

Independence of the evaluator does not mean lack of involvement in the project. Key roles of the evaluator requiring early involvement in the project are:

  • Identification of key stakeholders and partners
  • Eliciting from the partners a meaningful set of goals and objectives for the project and their relative priorities
  • Obtaining insight and consensus regarding which measures will indicate the degree to which project success has been achieved
  • Communicating changes in goals, objectives, and measures as the project progresses

Data can be collected either by the partners in the project (as long as the independent evaluator maintains some oversight of the process) or by the independent evaluator, or by both. The data analysis phase, in contrast, must be performed completely independently of the partners. However, interim results can and should be shared with the partners to obtain their insights regarding possible flaws in assumptions or errors in the analysis. The best evaluation results are those that lead to improvements in the system itself. In such instances, documentation of results may not be as important as the improvements caused by the results. Nevertheless, each ITS evaluation has the potential to contribute to the growing body of knowledge about costs and benefits of ITS applications. (See the section on reporting requirements in Recommended Evaluation Process Step 6, below.)

Recommended Evaluation Process

The ITS Joint Program Office recommends employing the following six-step process for ITS project evaluation. This process has been employed successfully by many ITS projects in the past.

Step 1. Form the Evaluation Team

Each of the project partners and stakeholders designates one member to participate on the evaluation team. The program manager should designate an evaluation team leader. In the interest of conducting an effective evaluation, this team should interact with the independent evaluator periodically throughout the project development and deployment phases. Experience has demonstrated that formation of this team early in the project is essential to facilitating planning and avoiding surprises later on in the project time line. Participation by every project stakeholder is particularly crucial during the development of the Evaluation Strategy.

Step 2. Develop the Evaluation Strategy

The Evaluation Strategy includes a description of the project to be evaluated and identifies the key stakeholders committed to the success of the project. It also relates the purpose of the project to the overall ITS goal areas, such as safety, mobility, efficiency, productivity, energy and the environment, and customer satisfaction. These above-listed goal areas have been used successfully in numerous independent evaluations and self-evaluations conducted under the authority of the Transportation Equity Act for the 21st Century (TEA-21). ITS projects conducted under SAFETEA-LU should be evaluated according to their impact on these goal areas.

A major purpose of the Evaluation Strategy development process is to focus the partners' attention on identifying which of the above goal areas have priority in their project. Each partner should assign numerical ratings reflecting how important each goal area is to the project. One easy method is to have each partner independently allocate 100 points to reflect the level of importance their organization believes the project has in achieving each goal area. From these individual ratings, a set of ratings for the collective group can then be determined. The process used to arrive at collective ratings could be as simple as taking the average of all of the ratings, or it could employ an iterative Delphi technique involving the partners. These collective ratings form the evaluation priorities. Project evaluation resources can then be assigned in keeping with the evaluation priorities of the group. This process gives partners valuable insight regarding areas of agreement and disagreement. It also assists the partners in reconciling differences, building a sense of teamwork and trust, and creating expectations about exactly what will be evaluated and the relative importance the evaluators will place on each goal area.
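
To make the arithmetic of this step concrete, the following minimal sketch (in Python, using hypothetical partner names and point allocations) averages each partner's 100-point allocation across the National ITS Program goal areas to produce a set of collective priorities; an iterative Delphi round could be substituted for the simple average:

    # Minimal sketch (hypothetical data): averaging partners' 100-point
    # allocations across goal areas to produce collective evaluation priorities.

    GOAL_AREAS = ["safety", "mobility", "efficiency", "productivity",
                  "energy/environment", "customer satisfaction"]

    # Each partner independently allocates 100 points across the goal areas.
    partner_ratings = {
        "City DOT":       {"safety": 40, "mobility": 25, "efficiency": 15,
                           "productivity": 5, "energy/environment": 5,
                           "customer satisfaction": 10},
        "Transit Agency": {"safety": 20, "mobility": 30, "efficiency": 10,
                           "productivity": 10, "energy/environment": 10,
                           "customer satisfaction": 20},
        "State DOT":      {"safety": 30, "mobility": 30, "efficiency": 20,
                           "productivity": 10, "energy/environment": 5,
                           "customer satisfaction": 5},
    }

    def collective_priorities(ratings):
        """Average each goal area's points across partners (a simple
        alternative to an iterative Delphi process)."""
        n = len(ratings)
        return {goal: sum(r[goal] for r in ratings.values()) / n
                for goal in GOAL_AREAS}

    if __name__ == "__main__":
        for goal, score in sorted(collective_priorities(partner_ratings).items(),
                                  key=lambda kv: kv[1], reverse=True):
            print(f"{goal:25s} {score:5.1f} points")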

Included in Appendix A of this document is a detailed description of the National ITS Program goal areas and associated measures for each. The "few good measures" discussed in Appendix A constitute the framework of benefits expected to result from deploying and integrating ITS technologies. While each project partnership will establish its own unique evaluation goals, the measures serve to maintain the focus of goal-setting on the overall objectives of the project. In addition, if all ITS projects employ these "few good measures" in their evaluations, this standardization facilitates the National ITS Program's analysis of similarities and differences in evaluation results, making it easier to compare "apples to apples." In addition, these few good measures are useful not only at the conclusion of the project, but also during the life of the deployed system. As data are collected, these results can be used to adjust various aspects of the ITS application(s) being tested or deployed so as to improve overall system performance.

Appendix B contains examples of several Evaluation Strategies. In particular, the FORETELL Evaluation Strategy [PDF, 115KB] document provides an example of how project partners used the goal weighting process to determine partners' collective evaluation priorities, and thus allocate resources reflecting the collective priorities of the evaluation team. The Evaluation Plans (see the next section for more detail about what is contained in an evaluation plan) for the two field operational tests of traveler and tourist information systems - one along Interstate 40 in northern Arizona [PDF, 484KB] and one in Branson, Missouri [PDF, 720KB] - merged the evaluation strategies into the evaluation plans, and provide the best examples of the rating process used by the partners.

Step 3. Develop the Evaluation Plan

After the goals are identified and evaluation priorities are set by the partners in the Evaluation Strategy, the Evaluation Plan should refine the evaluation approach by formulating hypotheses. Hypotheses are "if-then" statements that reflect the expected outcomes of the ITS project. For example, a possible goal of coordinating multiple jurisdictions' traffic signal systems is to improve safety by reducing rear-end crashes. If the Evaluation Strategy includes this goal, the Evaluation Plan would formulate one or more hypotheses that could be tested. In this case, one hypothesis might be: "If jurisdictions coordinate traffic signal timing, then rear-end collisions will be significantly reduced at intersections at jurisdictional boundaries." An even more aggressive hypothesis might suggest that such collisions would be reduced by 10%. The Evaluation Plan identifies all such hypotheses and then outlines the tests needed to evaluate them.

In addition to hypotheses regarding system and subsystem performance, the Evaluation Plan identifies qualitative studies that will be performed. The evaluation should address key components of the project, such as, but not limited to:

  • Implications of achieving consistency with the National ITS Architecture
  • Standards implementation
  • Consumer acceptance
  • Institutional (non-technical) issues

An area of special emphasis in all evaluation efforts should be the non-technical factors that influenced project performance. Institutional factors such as procurement practices, contracting policy, organizational structure, and relationships among major participants such as prime contractors and their subcontractors had a profound influence on ITS projects conducted under the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) and TEA-21. The ITS community stands to gain further insight into institutional issues from project and major ITS initiative evaluations conducted during the SAFETEA-LU era. Of particular interest is how the wide range of non-technical factors directly affects traditional project performance factors, such as cost, schedule, and functionality of the new technology.

Appendix B contains examples of Evaluation Plans for several different types of projects: state/local-funded studies, field operational tests (FOTs), and deployment evaluations. These documents show how evaluation hypotheses were developed, given the set of collective evaluation priorities.

Step 4. Develop One or More Test Plans

A test plan will be needed for each test identified in the Evaluation Plan. A Test Plan lays out all of the details regarding how the test will be conducted, and identifies the number of evaluation personnel, equipment, supplies, procedures, schedule, and resources required to complete the test.

Appendix B contains examples of test plans that were developed for two traveler and tourist information systems field operational tests along Arizona I-40 and in Branson, Missouri. The Arizona I-40 evaluators developed test plans for focus group surveys [PDF, 51KB], a route diversion study [PDF, 51KB], historical data analysis [PDF, 84KB] and a tourist intercept survey [PDF, 435KB]. The Branson, Missouri evaluators developed test plans for focus group surveys [PDF, 803KB], historical data analysis [PDF, 116KB], a tourist intercept survey [PDF, 435KB] and travel time surveys [PDF, 157KB]. These documents provide examples of how individual Test Plans are derived from hypotheses contained in the project Evaluation Plans.

Step 5. Collect and Analyze Data and Information

This step is the implementation of each Test Plan. It is in this phase that careful cooperation between implementing partners and evaluators can save time and money. With early planning, it is possible to build automated data collection into the implementation plan for the project. After completion of the project and the evaluation, the partners can continue to use data provided by automated data collection for continual refinement of the system.

Step 6. Prepare the Final Report

The final step in the evaluation process is to document the evaluation strategy, plans, results, conclusions, and recommendations in a Final Report. Appendix B contains examples of Final Reports from evaluation projects that also prepared Evaluation Strategies and/or Evaluation Plans, where these Final Reports are currently available. In addition, the Documents section of the ITS JPO Web site contains dozens of other evaluation Final Reports. These Final Reports exhibit considerable variation in the level of detail they present. Different levels of detail are appropriate for different audiences. One successful strategy is to publish the Executive Summary as a separate document intended for top-level decision-makers.

When the Final Report has been completed and published, it should be distributed to all members of the Evaluation Team and any other local partners that were involved in the evaluation process. In addition, so that the results of ITS project evaluations contribute to the growing body of knowledge on ITS costs and benefits, a copy of the Final Report should be sent to the appropriate JPO program manager who exercises oversight responsibility for the project or ITS initiative at the following address:

Intelligent Transportation Systems Joint Program Office (HOIT)
U.S. Department of Transportation
1200 New Jersey Avenue S.E.
Washington, D.C. 20590

Attn: Name of appropriate JPO program manager

Delivery of the Final Report to the JPO in electronic as well as hard-copy form will speed integration of the ITS evaluation results and lessons learned into national databases on ITS costs and benefits. Electronic versions of Final Reports can be sent to:

itspubs@dot.gov

Appendix A - Evaluation Measures

In the section that follows, each of the National ITS Program goal areas is presented, along with the key measures of effectiveness (MOEs) associated with each goal and a short discussion of how to measure each.

Safety

The measures of effectiveness used to quantify improvements in safety are:

  • Reduction in the overall rate of crashes
  • Reduction in the rate of crashes resulting in fatalities
  • Reduction in the rate of crashes resulting in injuries

Overall, injury, and fatality crash rates can be measured as the number of crashes per unit time. However, given that highway crashes are highly random events whose likelihood of occurrence increases as travel increases, normalizing crash rates by an exposure factor, such as million vehicle miles traveled, is recommended.

When evaluating the performance of an ITS project, a comparison can be made between crash rate (or fatality rate, or injury rate) in the period before and the period following the implementation of an ITS technology. Care should be given to the length of the study period and the collection of data in both time periods. It should also be noted that, due to the random nature of crash occurrences, it may not be possible to prove statistically that there was a significant difference between the number of crashes in the "before" and "after" periods. For these reasons, surrogate measures may provide a better (or at least equally desirable) indicator of the safety gains of ITS applications. For example, the use of variable message signs may reduce speeds during inclement weather, which in turn, is expected to reduce the risk of an accident occurring. Video surveillance has also been used to observe driver behavior at signalized intersections. In these projects, a reduction in red light running violations can be used as a surrogate measure for safety.
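
The following minimal sketch (in Python, using hypothetical crash counts and exposure figures) illustrates normalizing crash counts by million vehicle miles traveled and applying a simple normal-approximation test to the before/after difference; it is an illustration only, and an actual evaluation may require more rigorous statistical methods.

    # Minimal sketch (hypothetical counts): crash rates normalized per million
    # vehicle miles traveled (MVMT), with a simple normal-approximation test of
    # whether the before/after difference is statistically significant.
    import math

    def crash_rate_per_mvmt(crashes, vmt_millions):
        """Crashes per million vehicle miles traveled."""
        return crashes / vmt_millions

    def rate_difference_pvalue(c_before, mvmt_before, c_after, mvmt_after):
        """Two-sided p-value for a difference in Poisson crash rates
        (normal approximation; adequate only for reasonably large counts)."""
        r1 = c_before / mvmt_before
        r2 = c_after / mvmt_after
        se = math.sqrt(c_before / mvmt_before**2 + c_after / mvmt_after**2)
        z = (r1 - r2) / se
        return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

    # Hypothetical 12-month periods before and after an ITS deployment.
    before_crashes, before_mvmt = 96, 410.0   # 410 million VMT of exposure
    after_crashes,  after_mvmt  = 78, 425.0

    print("before:", round(crash_rate_per_mvmt(before_crashes, before_mvmt), 3), "crashes/MVMT")
    print("after: ", round(crash_rate_per_mvmt(after_crashes, after_mvmt), 3), "crashes/MVMT")
    print("p-value:", round(rate_difference_pvalue(before_crashes, before_mvmt,
                                                   after_crashes, after_mvmt), 3))

With the hypothetical counts shown, the apparent reduction is not statistically significant, which illustrates the caution above about proving a difference between the "before" and "after" periods.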

The U.S. Department of Transportation has issued guidelines to assist local governments in evaluating safety improvements; these guidelines can also be applied when assessing the safety impacts of ITS projects.

Mobility

The measures of effectiveness used to quantify improvements in mobility are:

  • Reductions in travel time delay
  • Reductions in travel time variability

Travel Time Delay

Delay to a user of a system is typically measured in seconds or minutes of delay per vehicle. Delay can be measured in many different ways depending on the type of transportation improvement being evaluated. For example, when evaluating the mobility gains of an adaptive traffic signal control system, the "floating car" method can be used to measure the delay experienced before and after installation of the system. In the "floating car" method, evaluation personnel measure the time it takes a probe vehicle to traverse an arterial, usually using a stopwatch or other time measurement equipment. Delay can also be measured by observing the number of stops experienced by drivers before and after a project comes on-line.
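
The sketch below (in Python, using hypothetical probe-vehicle run times and an assumed free-flow reference time) shows one way floating car measurements might be reduced to seconds of delay per vehicle before and after a signal timing improvement.

    # Minimal sketch (hypothetical run times): average "floating car" travel
    # time along an arterial before and after a signal retiming project,
    # expressed as delay per vehicle relative to a free-flow reference time.

    FREE_FLOW_SECONDS = 240.0   # assumed uncongested traversal time for the corridor

    before_runs = [388, 402, 371, 415, 397]   # probe-vehicle travel times, seconds
    after_runs  = [331, 344, 318, 352, 339]

    def mean(values):
        return sum(values) / len(values)

    def average_delay(run_times, free_flow=FREE_FLOW_SECONDS):
        """Mean delay per probe run, in seconds per vehicle."""
        return mean(run_times) - free_flow

    print(f"before: {average_delay(before_runs):.1f} s/vehicle of delay")
    print(f"after:  {average_delay(after_runs):.1f} s/vehicle of delay")
    print(f"reduction: {average_delay(before_runs) - average_delay(after_runs):.1f} s/vehicle")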

Travel Time Variability

This measure indicates the variability in overall travel time from an origin to a destination in the transportation network, including any modal transfers or en-route stops. This measure can readily be applied to intermodal freight (goods) movement, as well as personal travel. Reducing the variability of travel time increases the predictability around which important planning and scheduling decisions can be made by travelers or freight companies. By improving response time to incidents and providing information on delays, ITS services can reduce the variability of travel time in transportation networks. One example of an ITS application that reduces travel time variability is a commercial trucking dispatcher who uses a real-time traffic information Web site to learn about delays and re-route drivers around congested areas. The calculation of travel time variability involves an analysis of the spread (or distribution) of travel time around the mean (or average) travel time. Travel time variability can be calculated over different time horizons, such as within-day and day-to-day variability of a given trip or goods movement from an origin to a destination. Several statistics indicative of variability can be computed from a travel time data set. For example, travel time variability can be computed as the standard deviation or variance around the mean; the range of travel time values (low to high) is another indicator.
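
As an illustration of these statistics, the sketch below (in Python, using hypothetical travel times for a single origin-destination pair) computes the mean, standard deviation, variance, and range of day-to-day travel times.

    # Minimal sketch (hypothetical trip times): summarizing day-to-day travel
    # time variability for one origin-destination pair.
    import statistics

    # Travel times (minutes) for the same 8:00 a.m. trip observed on ten weekdays.
    trip_minutes = [34.5, 41.0, 36.2, 52.8, 38.4, 35.1, 44.7, 37.9, 39.6, 36.8]

    mean_tt  = statistics.mean(trip_minutes)
    stdev_tt = statistics.stdev(trip_minutes)      # spread around the mean
    var_tt   = statistics.variance(trip_minutes)
    tt_range = max(trip_minutes) - min(trip_minutes)

    print(f"mean travel time:   {mean_tt:.1f} min")
    print(f"standard deviation: {stdev_tt:.1f} min")
    print(f"variance:           {var_tt:.1f} min^2")
    print(f"range (low-high):   {tt_range:.1f} min")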

Capacity/Throughput

The measure of effectiveness used to quantify improvements in capacity/throughput is:

  • Increase in throughput or effective capacity

One goal of many ITS projects is to optimize use of existing facilities and rights-of-way so that travel demands can be met while reducing the need to construct new facilities or expand rights-of-way. One way to accomplish this goal of improving network efficiency is by increasing the effective capacity of the transportation system.

Effective capacity is the maximum potential rate at which persons or vehicles may traverse a link, node, or network under a representative composite of roadway conditions. Capacity, as defined by the Highway Capacity Manual (HCM), is: "maximum hourly rate at which persons or vehicles can reasonably be expected to traverse a given point or uniform section of a lane or roadway during a given time period under prevailing roadway, traffic and control conditions." The major difference between effective capacity and capacity, as defined by the HCM, is that capacity is assumed to be measured under good weather and pavement conditions and without incidents, whereas effective capacity can vary depending on these conditions and the use of management and operations strategies such as ITS.

Throughput is defined as the number of persons, vehicles, or units of freight actually traversing a roadway section or network per unit time. Increases in throughput are sometimes realizations of increases in effective capacity. Under certain conditions, measured throughput may reflect the maximum number of vehicles that can be processed by a transportation system. Capacity (and effective capacity) is calculated from the design and operation of the network segment and does not change unless the physical construction or operation of that segment changes. In contrast, throughput is an observable measure, and thus is an MOE for the efficiency ITS goal area. Care must be taken in interpreting results, however, because throughput changes may be due to factors besides effective capacity changes (e.g., changes in demand). Thus, not all throughput changes are indicative of improvements in the efficiency of a given situation.

Throughput can be measured by taking volume counts of the number of persons or vehicles traversing a roadway section or network per unit time. One type of ITS application that has been demonstrated to increase throughput is ramp metering, which paces vehicles' entry onto the freeway, thereby allowing a greater number of vehicles to travel on a freeway network at higher speeds. Another example is electronic toll collection, which allows more vehicles to pass through toll plazas at higher speeds.
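
The sketch below (in Python, using hypothetical 15-minute volume counts and an assumed average vehicle occupancy) converts counts into vehicle and person throughput per hour for a before/after comparison.

    # Minimal sketch (hypothetical counts): vehicle and person throughput on a
    # freeway section from 15-minute volume counts, before and after ramp metering.

    AVG_OCCUPANCY = 1.2   # assumed persons per vehicle

    # Vehicles counted in consecutive 15-minute intervals during the peak hour.
    before_counts = [1410, 1385, 1362, 1398]
    after_counts  = [1530, 1512, 1545, 1498]

    def hourly_throughput(quarter_hour_counts):
        """Vehicles per hour over the counted period."""
        hours = len(quarter_hour_counts) * 0.25
        return sum(quarter_hour_counts) / hours

    for label, counts in (("before", before_counts), ("after", after_counts)):
        veh_per_hr = hourly_throughput(counts)
        print(f"{label}: {veh_per_hr:.0f} veh/hr, "
              f"{veh_per_hr * AVG_OCCUPANCY:.0f} persons/hr")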

Customer Satisfaction

The measure of effectiveness used to quantify improvements in customer satisfaction is:

  • Difference between users' expectations and experience in relation to a service or product

The central question in a customer satisfaction evaluation is: "Does the product deliver sufficient value (or benefits) in exchange for the customer's investment of money or time?" Given that many ITS projects and programs are developed to serve the public, it is important to ensure that our users' (i.e., customers') needs are being met or surpassed. A detailed review of how to evaluate customer satisfaction is included below after a brief discussion on how to evaluate the transportation providers’ satisfaction with ITS projects.

The following are six stages of a customer's experience with a product or service. Each stage is a function of many factors, including the previous stage or stages. This behavioral model has been used successfully in the past to evaluate customer satisfaction with ITS products and services.

  1. Product awareness
  2. Expectations of product benefits
  3. Product use
  4. Response - Decision-making and/or behavior change
  5. Realization of benefits
  6. Assessment of value

1. Product awareness. There are many ways that a customer can become aware of a product or service, such as advertising, outreach, word-of-mouth, customer curiosity, or a combination of multiple factors.

2. Expectations of product benefits. Determination of a customer's expectation of product benefits is more complex, and relates to the individual's prior experience and the way in which awareness was created. For example, a toothpaste advertisement that promises whiter teeth may create greater expectations when placed in a dentist’s journal than in a fashion magazine. Expectations are also influenced by the medium, the industry, or company that manufactures or sells the product. The more reliable the medium that delivers the services, the greater the expectation for a reliable service.

3. Product use. Product use is a function of awareness and expectation of benefits. Before a customer will actually use a product, he or she must know about the product, and expect to receive some benefit from its use. Later, once the customer has had experience with the product, and understands its benefits, product use becomes a function of context. For example, people put on warm boots when it snows.

4. Response. Customers use an information product, such as traffic information, primarily to inform a decision or obtain an assessment of uncertain conditions: "Should I leave the office now, or wait an hour?" "Am I going to be sitting in this traffic congestion for hours or minutes?" The outcome of such a product use may be a change in behavior or a change in expectations of the customer. In adjusting behavior, customers judge the value of a product by whether the product delivers as expected: "Was the information correct? Am I better off now than I was before I used the product?" Satisfaction is then a function of expectation (which is itself a function of awareness), product use, and response.

5. Realization of benefits. Benefits are dependent on the quality of the product and the context in which the product is used. With a product such as traffic information, realization of product benefits is affected by the contextual conditions such as the quality of the information, the customer's personal travel flexibility, geography, and weather. For example, if the customer has only one route, mode, or time period available to her, she may receive less of a benefit from product use than a customer who has a greater selection of possible response options.

6. Assessment of value. Value is a function of the customer's investment in obtaining the benefits, in relation to the benefits themselves. An example of a situation where the cost of the investment outweighs the benefits received is the following: "It took me 20 minutes to open the traffic Web page, and once I did, the congestion was so widespread, I couldn't do anything to shorten my trip. If I'd left 20 minutes ago, I'd be halfway home by now." An example of a situation where the benefits outweigh the costs is: "Every day the morning traffic congestion peaks at a different time. If I can time my departure to avoid peak congestion, I can enjoy a stress-free drive to work in the morning." Individuals frequently have trouble assessing the value of a product that they do not pay for directly, and this difficulty in assessment increases the challenges of measuring product value.

It is particularly challenging to measure customer satisfaction with a new, innovative, or breakthrough product or service. When customers have not had extensive personal experience with a new product, they do not know what to expect. Development of a realistic appraisal of benefits and value can take anywhere from a few trials to a few months. The most difficult products to evaluate are those that require a shift in behavior to use. Real-time traveler information services meet all these criteria. They offer a significant improvement in quality over traditional radio traffic reports. They require a change in behavior to use, e.g., opening a Web page before leaving the office. They require a further change in behavior in order to experience the benefits from use and to assess value, e.g., a change in travel time, route, or mode, or a decision not to travel. Finally, customers can experience benefits of real-time traveler information services even when no changes in travel behavior have occurred. Simply knowing why traffic is blocked or when it is likely to clear up is reassuring and reduces stress. All of these factors compound the challenge of designing an evaluation that accurately captures customers' satisfaction with ITS products and services.

The following are measures for customer satisfaction with real-time traffic information:

  • Awareness of product
  • Expectations of product performance and product benefits
  • Product usability - presentation or organization of information, product design features
  • Information quality and credibility
  • Travel decisions and behavior as a result of product use over time
  • Benefits (or disbenefits) realized from product use
  • Value of product - willingness to pay

There are several ways to collect data in support of the metrics listed above. These data collection methods can generally be categorized into two types: qualitative (focus groups) and quantitative (stated preference surveys, revealed preference surveys).

Conducting focus groups is a qualitative research method that explores the way in which the customer uses and values the product. Focus groups enable deeper exploration of user perceptions, values, and behavior. They can be used alone to provide insight into a situation, or serve as a basis for the development of quantitative surveys. Focus group survey results have no statistical significance and should not be extrapolated in order to make generalizations about the larger population. However, focus group findings can be used to support testimonials and other public relations information about the quality of the product.

Conducting stated preference surveys is a quantitative research method that records the respondent's subjective assessment. When conducted correctly, stated preference surveys can support statistically valid generalizations about the reaction of various types of users to various types of situations. They also provide a basis from which to predict how different types of users will behave under various conditions. Stated preference survey results are not suitable for an objective assessment of measures such as the actual amount of time saved or miles traveled. For these measures, revealed preference survey techniques should be used.

Conducting revealed preference surveys is a quantitative research method in which the user is observed and his or her actions are recorded. An example of revealed preference survey techniques would be counting the number of hits to a real-time traffic information Web page or number of calls to a telephone information service. Another example is counting the number of drivers who select a certain route or exit before and after the notification of an incident ahead by means of a variable message sign. These surveys provide an objective measure of traveler behavior, and are thus more reliable than subjective, stated preference surveys. Revealed preference surveys, however, cannot provide explanations or motivations for user actions. For example, it is impossible to know why a user accessed a real-time traffic information Web page using only this survey technique.

In addition to the traveling public, another user of Intelligent Transportation Systems is the transportation system provider or manager. For example, many ITS projects are implemented to help better coordinate among various stakeholders in a regional transportation network. In such projects, it is necessary to measure the satisfaction of the transportation providers to ensure the best use of limited funding.

One way to measure the performance of this type of ITS project is to survey transportation providers before and after a project is implemented to determine if coordination was actually improved. It may also be possible to convene representatives from each of the stakeholder groups to evaluate their satisfaction with the system before and after implementation. This type of approach helps to identify areas of interest to all stakeholders and can be used to refine the system before and after installation. Examples of questions to be asked of transportation providers include:

  • Is the additional information resulting from the project helpful in executing your responsibilities?
  • What is your overall impression of the project's impact on your operations?
  • Has the project improved the working relationship between the involved agencies?

The U.S. Department of Transportation has issued a primer and a practical guide on measuring customer satisfaction with ITS products and services.

Productivity

The measure of effectiveness used to quantify improvements in productivity is:

  • Cost savings

There are two ways to calculate the cost savings of Intelligent Transportation Systems. One way is to calculate the difference in costs before and after installation of a system. Another way is to compare the cost of an Intelligent Transportation System to the cost of traditional transportation improvements designed to address the same problem. An example of the latter would be comparing the cost of a ramp-metering program versus constructing more lane miles of freeway, with both options designed to improve the level of service of a road network to the same degree. Several component elements constitute the cost of an ITS application or any transportation improvement. These include the acquisition (capital) cost, operating and maintenance costs, and income in the case of revenue-generating transportation facilities such as transit systems.
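
The sketch below (in Python, using hypothetical cost figures) illustrates the second approach: annualizing the capital cost of an ITS improvement and of a traditional alternative with a standard capital recovery factor so that their total yearly costs can be compared.

    # Minimal sketch (hypothetical costs): comparing the annualized cost of an
    # ITS improvement against a traditional alternative addressing the same problem.

    def annualized_cost(capital, annual_om, annual_revenue=0.0,
                        rate=0.05, life_years=20):
        """Capital cost annualized with a capital recovery factor, plus annual
        operations/maintenance, minus any annual revenue."""
        crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
        return capital * crf + annual_om - annual_revenue

    ramp_metering    = annualized_cost(capital=4_000_000, annual_om=350_000)
    freeway_widening = annualized_cost(capital=30_000_000, annual_om=150_000)

    print(f"ramp metering:     ${ramp_metering:,.0f} per year")
    print(f"freeway widening:  ${freeway_widening:,.0f} per year")
    print(f"estimated savings: ${freeway_widening - ramp_metering:,.0f} per year")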

To address the need for detailed data on costs of Intelligent Transportation Systems, the SAFETEA-LU Reporting and Evaluation Guidelines encourage the appropriate JPO program manager and evaluation managers to collect and report cost data during the conduct of the project or initiative.

Energy and Environment

The measures of effectiveness used to quantify improvements in energy and the environment are:

  • Reduction in emissions
  • Reduction in fuel consumption

The air quality and energy impacts of ITS services are very important considerations, particularly for metropolitan areas that have not attained air quality standards established by the Clean Air Act Amendments of 1990 ("non-attainment areas"). In most cases, environmental benefits from a given project can only be estimated by analysis and simulation. There are many challenges to evaluating the environmental impact of ITS projects. Frequently, the impact of individual ITS projects is very small, compared to the environmental conditions of the larger geographic region. In addition, there are many external variables influencing environmental conditions, such as weather conditions, pollutants emitted by non-mobile sources (i.e. not vehicles), and even pollutants generated in other metropolitan areas and carried to the study area by certain weather conditions. Small-scale studies conducted to date reveal that ITS applications have a positive impact on the environment. However, the environmental impacts of travelers reacting to large-scale ITS deployment in the long term are not well understood.

To assess the air quality impact of transportation improvements, the evaluator measures pollutants that are typically emitted by vehicles, called "mobile source emissions." These pollutants are carbon monoxide (CO), nitrogen oxides (NOx), and volatile organic compounds (VOCs) such as hydrocarbons (HC). In areas with little wind movement, hydrocarbons and nitrogen oxides react in the presence of sunlight to form surface-level ozone, also called "ground-level ozone" or "smog."

Typically, the most efficient way to assess the change in emission levels and energy consumption before and after the implementation of an ITS project would be to apply a simulation model to estimate the resulting changes in these measures.
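
The sketch below (in Python) illustrates this kind of estimate in its simplest form, multiplying corridor vehicle miles traveled by per-mile emission and fuel-consumption factors under "before" and "after" flow conditions; the factors shown are illustrative placeholders only, and an actual evaluation would draw emission rates from an approved model such as EPA's MOVES.

    # Minimal sketch (placeholder factors): estimating the change in mobile-source
    # emissions and fuel use from before/after VMT and flow conditions. The
    # constants below are illustrative, not model-derived.

    def daily_emissions_kg(vmt, grams_per_mile):
        return vmt * grams_per_mile / 1000.0

    # Illustrative emission and fuel factors per vehicle mile, by flow regime.
    CO_FACTORS = {"congested": 18.0, "improved_flow": 12.0}        # grams/mile
    FUEL_GAL_PER_MILE = {"congested": 0.055, "improved_flow": 0.045}

    daily_vmt = 1_200_000   # hypothetical corridor vehicle miles traveled per day

    for scenario in ("congested", "improved_flow"):
        co = daily_emissions_kg(daily_vmt, CO_FACTORS[scenario])
        fuel = daily_vmt * FUEL_GAL_PER_MILE[scenario]
        print(f"{scenario:14s} CO: {co:,.0f} kg/day, fuel: {fuel:,.0f} gal/day")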

Appendix B

For each project, the listing below gives the project type in parentheses, followed by the available evaluation documents: Evaluation Strategy, Evaluation Plan, Individual Test Plans, and Final Report.

Twin Cities Ramp Meter Evaluation (State/Local-Funded Study)
  • Evaluation Plan: Twin Cities Ramp Meter Evaluation: Evaluation Plan (EDL# 13423)
  • Final Report: Twin Cities Ramp Meter Evaluation - Executive Summary (EDL# 13424); Full Report (EDL# 13425)

I-95 FleetForward Evaluation (Deployment Evaluation)
  • Evaluation Strategy: Strategic Plan for the FleetForward Evaluation (EDL# 9263)
  • Final Report: FleetForward Evaluation Final Report (EDL# 13283)

TravTips (Deployment Evaluation)
  • Evaluation Plan: Evaluation Plan for the I-95 Corridor Coalition ATIS (Corridor TravTips) Program (EDL# 9184)
  • Final Report: I-95 Corridor Coalition Evaluation of the Advanced Traveler Information System Field Operational Test (TravTips) (EDL# 13604)

Model Deployment of a Regional Multi-Modal 511 Traveler Information System (511 Model Deployment)
  • Evaluation Plan: Final Evaluation Plan: Model Deployment of a Regional Multi-Modal 511 Traveler Information System (EDL# 14014) (HTML) (Adobe Acrobat)
  • Individual Test Plans: Key Informant Interviews Test Plan (EDL# 13974) (HTML) (Adobe Acrobat); System Usage Test Plan (EDL# 13975) (HTML) (Adobe Acrobat)
  • Final Report: Final Report: Model Deployment of a Regional Multi-Modal 511 Traveler Information System (EDL# 14248)

Orlando Regional Alliance for Next Generation Electronic Payment Systems (ORANGES) (Field Operational Test)
  • Individual Test Plans: ORANGES Evaluation Test Plans: Test Plans for the US DOT Sponsored Evaluation of the ORANGES Electronic Payment System Field Operational Test (EDL# 13771) (HTML) (Adobe Acrobat); ORANGES Evaluation: Discussion Group Process (EDL# 13809) (HTML) (Adobe Acrobat); ORANGES Evaluation Test Plans: Analysis of Before Data Collected for the US DOT Sponsored Evaluation of the ORANGES Electronic Payment Systems Field Operational Test (EDL# 13932) (HTML) (Adobe Acrobat)
  • Final Report: Final Report: ORANGES Electronic Payment Systems Field Operational Test Evaluation (EDL# 14268); Appendices (EDL# 14269)

Utah Transit Authority Connection Protection System (Deployment Evaluation)
  • Evaluation Plan: Final Evaluation Plan: Utah Transit Authority Connection Protection System (EDL# 13896) (HTML) (Adobe Acrobat)
  • Individual Test Plans: Final Detailed Test Plans: Evaluation of Utah Transit Authority Connection Protection System (EDL# 13939) (HTML) (Adobe Acrobat)
  • Final Report: Final Project Report: Evaluation of Utah Transit Authority Connection Protection System (EDL# 14074)

Central Puget Sound Regional Fare Coordination "Smart Card" Project (Earmark National Evaluation)
  • Evaluation Strategy: Evaluation Strategy - Puget Sound Regional Fare Card: FY01 Earmark Evaluation (EDL# 13856) (HTML) (Adobe Acrobat)
  • Final Report: Final Report: Evaluation of the Central Puget Sound Regional Fare Coordination Project (EDL# 14300)

Capital Wireless Integrated Network (CapWIN) (Earmark National Evaluation)
  • Evaluation Strategy: FY00 Integration Earmarks – National Evaluation Program CapWIN: The Capital Wireless Integrated Network Evaluation Strategy (EDL# 13696) (HTML) (Adobe Acrobat)
  • Final Report: Evaluation still ongoing; final report not yet available.

Computer-Aided Dispatch/Traffic Management Center: WSDOT Deployment (Field Operational Test)
  • Evaluation Plan: Computer-Aided Dispatch – Traffic Management Center Field Operational Test Final Evaluation Plan: WSDOT Deployment (EDL# 13969)
  • Individual Test Plans: Computer-Aided Dispatch – Traffic Management Center Field Operational Test Final Detailed Test Plan: WSDOT Deployment (EDL# 13970)
  • Final Report: Computer-Aided Dispatch – Traffic Management Center Field Operational Test Final Report: Washington State (EDL# 14325)

Computer-Aided Dispatch/Traffic Management Center: State of Utah (Field Operational Test)
  • Evaluation Plan: Computer-Aided Dispatch – Traffic Management Center Field Operational Test Final Evaluation Plan: State of Utah (EDL# 13971)
  • Individual Test Plans: Computer-Aided Dispatch – Traffic Management Center Field Operational Test Final Test Plans: State of Utah (EDL# 13990)
  • Final Report: Computer-Aided Dispatch – Traffic Management Center Field Operational Test Final Report: Utah (EDL# 14324)

Delaware ITMS Integration (Earmark National Evaluation)
  • Evaluation Strategy: Delaware Statewide ITMS Integration – ITS Evaluation Strategy (EDL# 13690) (HTML) (Adobe Acrobat)
  • Final Report: An Evaluation of Delaware's DelTrac Program - Building Integrated Transportation Management System (EDL# 14019) (HTML) (Adobe Acrobat)

Metropolitan Model Deployment Initiative (MMDI) Evaluation (Deployment Evaluation)
  • Evaluation Strategy: Metropolitan Model Deployment Initiative: National Evaluation Strategy (EDL# 6305)
  • Final Report: Deploying and Operating Integrated Intelligent Transportation Systems: Twenty Questions and Answers (EDL# 13599)

FORETELL (Field Operational Test)
  • Evaluation Strategy: Evaluation of the FORETELL Consortium Operational Test: Weather Information for Surface Transportation: Evaluation Strategy (EDL# 10143)
  • Individual Test Plans: Final Test Plans: FORETELL™ Consortium Operational Test: Weather Information for Surface Transportation (EDL# 13878) (HTML) (Adobe Acrobat)
  • Final Report: Final Report of the FORETELL Consortium Operational Test: Weather Information for Surface Transportation (EDL# 13833) (HTML) (Adobe Acrobat)

Acadia National Park (Field Operational Test)
  • Evaluation Strategy: Acadia National Park ITS Field Operational Test Evaluation Strategic Plan (EDL# 13196)
  • Evaluation Plan: Acadia National Park ITS Field Operational Test Evaluation Plan (EDL# 13195)
  • Final Report: Acadia National Park ITS Field Operational Test
    - Final Report (EDL# 13834) (HTML) (Adobe Acrobat)
    - Visitor Survey (EDL# 13806) (HTML) (Adobe Acrobat)
    - State of Maine Data Analysis (EDL# 13860) (HTML) (Adobe Acrobat)
    - Acadia National Park Data Analysis (EDL# 13861) (HTML) (Adobe Acrobat)
    - Island Explorer Data Analysis (EDL# 13862) (HTML) (Adobe Acrobat)
    - Key Informant Interviews (EDL# 13863) (HTML) (Adobe Acrobat)
    - Business Survey (EDL# 13864) (HTML) (Adobe Acrobat)
    - Parking Report (EDL# 13865) (HTML) (Adobe Acrobat)

Arizona I-40 Traveler and Tourist Information (Field Operational Test)
  • Evaluation Plan: Evaluation Plan: The I-40 Traveler and Tourist Information System Field Operational Test (EDL# 6084)
  • Individual Test Plans: Test Plans: I-40
    - Focus Groups and Personal Interviews (EDL# 6107)
    - Route Diversion (EDL# 6108)
    - System/Historical Data Analysis (EDL# 6109)
    - Tourist Intercept Survey (EDL# 6110)
  • Final Report: Advanced Traveler Information Services in Rural Tourism Areas: Branson Travel and Recreational Information Program (Missouri) and Interstate 40 Traveler and Tourist Information System (Arizona)
    - Final Report (EDL# 13070)
    - Appendix A: Tourist Intercept Surveys (EDL# 13073)
    - Appendix B: Qualitative Interviews and Focus Groups (EDL# 13075)
    - Appendix C: Observations of Tourist Interactions with Kiosks (EDL# 13076)

Branson, Missouri TRIP (Field Operational Test)
  • Evaluation Plan: Evaluation Plan: The Branson Travel and Recreational Information Program Field Operational Test (EDL# 6083)
  • Individual Test Plans: Test Plans: Branson TRIP
    - Focus Group and Personal Interviews (EDL# 6103)
    - System/Historical Data Analysis (EDL# 6104)
    - Tourism Intercept Survey (EDL# 6105)
    - Travel Time/Data Accuracy Test (EDL# 6106)
  • Final Report: Advanced Traveler Information Services in Rural Tourism Areas: Branson Travel and Recreational Information Program (Missouri) and Interstate 40 Traveler and Tourist Information System (Arizona)
    - Final Report (EDL# 13070)
    - Appendix A: Tourist Intercept Surveys (EDL# 13073)
    - Appendix B: Qualitative Interviews and Focus Groups (EDL# 13075)
    - Appendix C: Observations of Tourist Interactions with Kiosks (EDL# 13076)

Greater Yellowstone Regional Traveler and Weather Information System (Earmark National Evaluation)
  • Evaluation Plan: Greater Yellowstone Regional Traveler and Weather Information System Evaluation Plan (EDL# 13658) (HTML) (Adobe Acrobat)
  • Final Report: Final Evaluation Report for the Greater Yellowstone Regional Traveler and Weather Information System (GYRTWIS) (EDL# 13958) (HTML) (Adobe Acrobat)

Cape Cod Rural Advanced Intermodal System (Field Operational Test)
  • Evaluation Plan: Evaluation Plan for the Cape Cod Advanced Public Transportation System (EDL# 13079)
  • Final Report: Evaluation of the Cape Cod Advanced Public Transit System Phase 1 and 2: Final Report (EDL# 14096)

Northeast Florida Rural Transit ITS (Field Operational Test)
  • Evaluation Plan: Northeast Florida Rural Transit ITS Evaluation Plan (EDL# 13654) (HTML) (Adobe Acrobat)
  • Final Report: Northeast Florida Rural Transit Intelligent Transportation System: Final Report (EDL# 13848) (HTML) (Adobe Acrobat)

I-95 Corridor Coalition Field Operational Test 8 (Deployment Evaluation)
  • Evaluation Strategy: Evaluation Strategy for the I-95 Corridor Coalition Electronic Credentialing Program (EDL# 9183)
  • Final Report: I-95 Corridor Coalition - Evaluation of Field Operations Test 8: Electronic Credentialing
    - New York State Proof-of-Concept One Stop Credentialing and Registration (EDL# 13595) (HTML) (Adobe Acrobat)
    - Results of New York State Motor Carrier Survey (EDL# 13484) (HTML) (Adobe Acrobat)

Commercial Vehicle Information Systems and Networks (CVISN) Model Deployment (Deployment Evaluation)
  • Evaluation Plan: CVISN Model Deployment Initiative Summary Evaluation Plan
    - Executive Summary (EDL# 6003)
    - Evaluation Plan (EDL# 6004)
  • Final Report: Evaluation of the Commercial Vehicle Information Systems and Networks (CVISN) Model Deployment Initiative
    - Volume I: Final Report (EDL# 13677)
    - Volume II: Appendices (EDL# 13699)

WSDOT Intermodal Data Linkages (Field Operational Test)
  • Evaluation Plan: WSDOT Intermodal Data Linkages: ITS Field Operational Test Evaluation Plan (EDL# 13475)
  • Final Report: WSDOT Intermodal Data Linkages Freight ITS Operational Test Evaluation Final Report
    - Part 1: Electronic Container Seals Evaluation (EDL# 13770) (HTML) (Adobe Acrobat)
    - Part 2: Freight ITS Traffic Data Evaluation (EDL# 13781) (HTML) (Adobe Acrobat)

Electronic Intermodal Supply Chain Manifest (Field Operational Test)
  • Evaluation Plan: Electronic Intermodal Supply Chain Manifest: ITS Field Operational Test Evaluation Plan (EDL# 13474)
  • Final Report: Electronic Intermodal Supply Chain Manifest: Freight ITS Operational Test Evaluation Final Report (EDL# 13769) (HTML) (Adobe Acrobat)

Freight Information Real-Time System for Transport (FIRST) (Deployment Evaluation)
  • Evaluation Plan and Individual Test Plans: A Combined Report for Freight Information Real-Time System for Transport (FIRST); Part A – Final Evaluation Plan; Part B – Detailed Test Plans (EDL# 13671) (HTML) (Adobe Acrobat)
  • Final Report: Freight Information Real-Time System for Transport (FIRST): Evaluation Final Report (EDL# 13951) (HTML) (Adobe Acrobat)

Hazardous Material Transportation Safety and Security (Field Operational Test)
  • Evaluation Plan: Hazardous Material Transportation Safety and Security Field Operational Test Final Evaluation Plan: Executive Summary (EDL# 13844) (HTML) (Adobe Acrobat)
  • Individual Test Plans:
    - Hazardous Material Transportation Safety and Security Field Operational Test Final Detailed Test Plans: Executive Summary (EDL# 13899) (HTML) (Adobe Acrobat)
    - Hazardous Material Transportation Safety and Security Field Operational Test Beta Test and Baseline Data Report: Executive Summary (EDL# 13900) (HTML) (Adobe Acrobat)
    - Hazardous Material Transportation Safety and Security Field Operational Test Public Sector Detailed Test Plans (EDL# 14020)
  • Final Report:
    - Hazardous Material Transportation Safety and Security Field Operational Test Volume I: Evaluation Final Report Executive Summary (EDL# 14094)
    - Hazardous Materials Safety and Security Technology Field Operational Test Volume II: Evaluation Final Report Synthesis (EDL# 14095)

I90/94 Fiber Backbone Network and Spurs Build-out (Phase II) (Earmark Self-Evaluation)
  • Final Report: I90/94 Fiber Backbone Network and Spurs Build-out (Phase II) Wisconsin Department of Transportation - Final Report and Local Evaluation (EDL# 14355)

Great Lakes ITS Case Study and Lessons Learned for the Airport ITS Integration Program and Road Infrastructure Management Systems Projects (Earmark National Evaluation)
  • Final Report: Great Lakes ITS Case Study and Lessons Learned for the Airport ITS Integration Program and Road Infrastructure Management Systems Projects - Final Report (EDL# 14368)