Other Accompanying Information

Performance Data Completeness and Reliability Details

Each table includes a description of a performance measure and associated data provided by the agencies in charge of the measure. The Scope statement gives an overview of the data collection strategy for the underlying data behind the performance measure. The Source statement identifies the data system(s) from which the data for each measure were taken. The Statistical Issues statement contains comments, provided by the Bureau of Transportation Statistics (BTS) and the agency in charge of the measure, that discuss the variability of the measure and other points. The Completeness statement indicates limitations due to missing data or the availability of current measures; methods used to develop projections are also described, as appropriate. The Reliability statement describes how the performance data are used in program management decision making inside DOT.

For further information about the source and accuracy (S&A) of these data, and DOT's data quality guidelines in accordance with Section 515 of the Treasury and General Government Appropriations Act for Fiscal Year 2001 (P.L. 106-554), please refer to the BTS S&A compendium available at http://www.bts.gov/programs/statistical_policy_and_research/source_and_accuracy_compendium/index.html.

Details on DOT Safety Measures
Highway Fatality Rate


Measure
Highway fatalities per 100 million vehicle-miles traveled (VMT). Calendar Year (CY) 2007
Scope
The number of fatalities is a count of occupant and non-motorist deaths that occur within 30 days of a crash involving a motor vehicle traveling on a trafficway customarily open to the public within the 50 States and Washington, D.C.

VMT represent the total number of vehicle miles traveled by motor vehicles on public roadways within the 50 States and Washington, D.C.
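
As a point of reference, the rate itself is straightforward arithmetic: fatalities divided by VMT, scaled to 100 million vehicle-miles. The sketch below is illustrative only; the function name and figures are hypothetical, not official DOT data.

```python
def fatality_rate_per_100m_vmt(fatalities: int, vmt_millions: float) -> float:
    """Highway fatality rate per 100 million vehicle-miles traveled (VMT)."""
    # vmt_millions is annual VMT expressed in millions of vehicle-miles,
    # so dividing by 100 converts it to units of 100 million vehicle-miles.
    return fatalities / (vmt_millions / 100.0)

# Hypothetical figures: 41,000 fatalities over 3,030,000 million VMT
# gives a rate of roughly 1.35 per 100 million VMT.
print(round(fatality_rate_per_100m_vmt(41_000, 3_030_000), 2))
```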
Sources
Motor vehicle traffic fatality data are obtained from the National Highway Traffic Safety Administration's Fatality Analysis Reporting System (FARS). The FARS database is based on police crash reports and other State data.

VMT data for 2007 are estimated based on preliminary 2007 VMT data from FHWA's Traffic Volume Trends (TVT), a monthly report based on hourly traffic count data in the Highway Performance Monitoring System (HPMS).
Statistical Issues
While based on historical data, the 2007 fatality rate projection depends on the continuation of both individual and market behavior regarding vehicle miles traveled, seat belt use, motorcycle rider fatalities, and alcohol-related fatalities. The assumptions inherent in these projections, together with the normal levels of uncertainty inherent in statistical evaluations, may influence the accuracy of the projection.
Completeness
FARS has been in use since 1975 and is generally accepted as a complete measure for describing safety on the Nation's highways. Total annual fatalities are available through CY 2006. The fatality projection used to calculate the 2007 rate shown in this report was estimated by modifying the 2006 fatality total for the subsequent phase-in of safety features in the on-road fleet, the scrapping of vehicles with existing safety features, a projected change in safety belt usage, a projected trend in motorcycle fatalities, and other safety-related considerations.
Reliability
The measure informs and guides NHTSA, FHWA, and FMCSA regarding highway safety policy, safety program planning, regulatory development, resource allocation, and operational mission performance, and tracks progress toward the goal of saving lives by preventing highway crashes.

Details on DOT Safety Measures
Large Truck-Related Fatalities


Measure
Fatalities involving large trucks per 100 million truck VMT. (CY)
Scope
The measure includes all fatalities associated with crashes involving trucks with a gross vehicle weight rating of 10,000 pounds or more. Truck Vehicle Miles of Travel (TVMT) represents the total number of vehicle miles traveled by large trucks on public roadways within the 50 States and the District of Columbia.
Sources
The number of fatalities comes from NHTSA's Fatality Analysis Reporting System (FARS) data, a census of fatal traffic crashes within the 50 States and the District of Columbia. The TVMT data are derived from the FHWA's Highway Performance Monitoring System (HPMS).
Statistical Issues
The fatality counts in FARS are generally quite accurate. The major sources of error are underreporting by some precincts and inconsistent use of the definition of a truck.

Because the TVMT data provided to FHWA from each State are estimates based on a sample of road segments, the numbers have associated sampling errors. The methodology used by each of the States to estimate TVMT varies and may introduce additional non-sampling error. Although States provide TVMT estimates on an annual basis, they are only required to update their traffic counts at all sampling sites once every three years. Thus, a portion of each State's sample sites will report estimated traffic rather than actual traffic counts.
Completeness
The FARS has been in use since 1975 and is generally accepted as a complete measure for describing safety on the Nation's highways. Large truck-related fatality data are complete through 2006. For 2007, the FARS data for crashes involving large trucks are not available until October 2008. The value used for the 2007 rate is projected from recent trend data. The TVMT is complete through 2005. For 2006 and 2007, it is projected as a percentage of the total VMT projections. The final TVMT estimate for 2006 will be available in December 2007, and the final TVMT estimate for 2007 will be available in December 2008.
Reliability
The measure informs and guides FMCSA, NHTSA, and FHWA highway safety policy, safety program planning, regulatory development, resource allocation, and operational mission performance, and tracks progress toward the goal of saving lives by preventing large truck crashes.

Details on DOT Safety Measures
Commercial Air Carrier Fatal Accident Rate


Measure
U.S. commercial fatal aviation accidents per 100,000 departures (last three years' average). (FY)
Scope
This measure includes both scheduled and nonscheduled flights of large U.S. air carriers (14 CFR Part 121) and scheduled flights of regional operators (14 CFR Part 135). It excludes on-demand (i.e., air taxi) service and general aviation. Accidents involving passengers, crew, ground personnel, and the uninvolved public are all included.
Sources
Fatal aviation accidents: The data on commercial and general aviation fatalities come from the National Transportation Safety Board's (NTSB) Aviation Accident Database. Aviation accident investigators under the auspices of the NTSB develop the data.

Departures Performed: The Office of Airline Information (OAI) within the Bureau of Transportation Statistics (BTS) collects the data on Form 41, Schedule T-100-U.S. Air Carrier Traffic and Capacity Data By Nonstop Segment and On-flight Market and Form 41, Schedule T-100 (f)-Foreign Air Carrier Traffic and Capacity Data by Nonstop Segment and On-flight Market.
Statistical Issues
The joint government/industry group working on improving the level of safety for U.S. commercial aviation has determined that the number of departures is a better denominator for calculating accident rates, and the Government Accountability Office has recommended that FAA use departures.

Both accidents and departures are censuses, having no sampling error. However, missing data, particularly in the departure counts, will result in bias to some degree. The fatal accident rate is small and could significantly fluctuate from year to year due to a single accident. Use of an average over three years smooths the fluctuation that may occur in any given year.
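
As an illustrative sketch of that smoothing, the calculation below assumes the three-year average is computed from pooled totals (accidents and departures summed over the three years) rather than as a mean of annual rates; the function name and input values are hypothetical.

```python
def three_year_fatal_accident_rate(fatal_accidents, departures):
    """Fatal accidents per 100,000 departures, pooled over the last three years."""
    assert len(fatal_accidents) == 3 and len(departures) == 3
    return sum(fatal_accidents) / sum(departures) * 100_000

# Illustrative values only: 2, 1, and 3 fatal accidents against roughly
# 11 million departures in each of three years.
print(round(three_year_fatal_accident_rate([2, 1, 3], [11_000_000] * 3), 3))
```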
Completeness
The FAA does comparison checking of the departure data collected by BTS. However, FAA has no independent data sources against which to validate the numbers submitted to BTS. FAA compares its list of carriers to the DOT list to validate completeness and places the carriers in the appropriate category (i.e., Part 121 or Part 135).

Actual departure data for any given period of time is considered preliminary for up to 12 months after the close of the reporting period. This is due to amended reports subsequently filed by the air carriers. However, the changes to departure data rarely have an effect on the annual fatal accident rate. NTSB and FAA's Office of Accident Investigation meet regularly to validate the accident count.

To overcome reporting delays of 60 to 90 days, FAA must rely on historical data, partial internal data sources, and Official Airline Guide (OAG) scheduling information to project at least part of the fiscal year activity data. FAA uses OAG data until official BTS data are available. The air carrier fatal accident rate is not considered reliable until BTS provides preliminary numbers. Due to reporting procedures in place, it is unlikely that calculation of future fiscal year departure data will be markedly improved. The lack of complete monthly historical data and of independent sources of verification increases the risk of error in the activity data.
Reliability
Results are considered preliminary based on projected activity data. FAA uses performance data extensively for program management, personnel evaluation, and accountability. Most accident investigations are a joint undertaking. NTSB has the statutory responsibility, but, in fact, most of the accident investigations related to general aviation are conducted by FAA Aviation Safety Inspectors without NTSB direct involvement. FAA's own accident investigators and other FAA employees participate in all accident investigations led by NTSB investigators.

Details on DOT Safety Measures
General Aviation Fatal Accidents


Measure
Number of fatal general aviation accidents. (FY)
Scope
The measure includes on-demand (non-scheduled FAR Part 135) and general aviation flights. General aviation includes a diverse range of aviation activities. General aviation aircraft range from single-seat homebuilt aircraft, helicopters, and balloons to single- and multiple-engine landplanes and seaplanes, including highly sophisticated extended-range turbojets.
Sources
The data on general aviation fatalities come from the National Transportation Safety Board's (NTSB) Aviation Accident Database. Aviation accident investigators under the auspices of the NTSB develop the data.
Statistical Issues
There are no major errors in the accident counts. Random variation in air crashes results in significant variation in the number of fatal accidents over time.
Completeness
NTSB and FAA's Office of Accident Investigations meet regularly to validate information on the number of accidents. Results are considered preliminary while NTSB continues to review accident results from FY 2006.

Numbers are final when the NTSB releases its report each March; thus, FY 2006 accident numbers will be finalized in March 2008. However, the numbers are not likely to change significantly between the end of each fiscal year and when the rate is finalized.
Reliability
FAA uses performance data extensively for program management and personnel evaluation and accountability. Most accident investigations are a joint undertaking between FAA and NTSB. NTSB has the statutory responsibility, but, in fact, most of the accident investigations related to general aviation are conducted by FAA Aviation Safety Inspectors without NTSB direct involvement. FAA's own accident investigators and other FAA employees participate in all accident investigations led by NTSB investigators.

Details on DOT Safety Measures
Train Accident Rate


Measure
Rail-related accidents and incidents per million train-miles (FY).
Scope
The Railroad Safety Information System (RSIS) is the principal monitoring strategy used by the FRA for the management, processing, and reporting on railroad-reported accidents/incidents; railroad inspections; highway-rail grade crossing data; and related railroad safety activities. The Railroad Accident/Incident Reporting Subsystem (RAIRS) is the repository of all FRA-mandated reports of railroad accidents, incidents, casualties, highway-rail grade crossing collisions, and operating information.

A train accident is any collision, derailment, fire, explosion, act of God, or other event involving the operation of railroad on-track equipment (standing and moving), which results in damages greater than the current reporting threshold to railroad on-track equipment, signals, track, track structures, and roadbed. Train accidents are reported on form FRA F6180.54, Rail Equipment Accident/Incident Report. The reporting threshold for 2007 is $8,200.

A train incident is any event involving the movement of on-track equipment that results in a reportable casualty but does not cause reportable damage above the current threshold established for train accidents. Operational data, including train-miles, are reported on the form FRA F6180.55, Railroad Injury and Illness Summary.
Sources
FRA's Railroad Accident/Incident Reporting Subsystem.
Statistical Issues
None.
Completeness
Railroads are required by regulation (49 CFR Part 225) to file monthly reports to the FRA of all train accidents that meet a dollar threshold (currently $8,200). They are also required to file monthly operations reports of train-miles, employee-hours, and passenger train-miles.

Reports must be filed within 30 days after the close of the month. Data must be updated when the costs associated with an accident vary by more than 10 percent (higher or lower) from that initially reported.

Railroad systems that do not connect with the general rail system are excluded from reporting to FRA. Examples include subway systems (e.g., Washington, D.C. Metro, New York City subway, San Francisco Bay Area Rapid Transit District), track existing inside an industrial compound, and insular rail (e.g., rail that is not connected to the general system and does not have a public highway rail crossing or go over a navigable waterway).
Reliability
FRA uses the data in prioritizing its inspections and safety reviews, and for more long-term strategic management of its rail safety program.

FRA has inspectors who review the railroads' reporting records, and who have the authority to write violations if railroads are not reporting accurately. Violations may result in monetary fines.

Details on DOT Safety Measures
Transit Fatality Rate


Measure
Transit fatalities per 100 million passenger-miles traveled. (CY)
Scope
Transit fatality data include passengers, revenue facility occupants, trespassers, employees, other transit workers (contractors), and others. A transit fatality is a death within 30 days after an incident that occurs under the categories of collision, derailment, personal casualty (not otherwise classified), fire, or bus going off the road in the National Transit Database (NTD) reporting system. Prior to 2002, transit-involved parties were defined as patrons, employees, and others (the safety data were collected on a fiscal year, as opposed to calendar year, basis). Fatalities for the performance measure use only transit agency Directly Operated (DO) mode data. Purchased Transportation (PT) data are not part of this measure. Certain fatalities are excluded, as they are not considered to be directly related to the operation of transit vehicles. Those include suicides and fatalities occurring in parking facilities and stations, as well as fires in rights-of-way and stations. Also, the measure includes only the major transit modes (motorbus/trolleybus, light rail, heavy rail, commuter rail, vanpool, automated guideway, and demand response) and excludes ferryboat, monorail, inclined plane, cable car, and jitney.

The passenger-miles traveled on public transit vehicles (e.g., buses, heavy and light railcars, commuter railcars, ferries, paratransit vans, and vanpools) only refer to miles while in actual revenue service to the general public.

These data are reported annually by operators to the FTA National Transit Database (NTD) and to the Federal Railroad Administration's (FRA) Rail Accident and Incident Reporting System (RAIRS). FRA RAIRS data are used exclusively for commuter rail (CR) safety data. NTD and RAIRS data are an input to FTA's Transit Safety and Security Statistics and Analysis program (formerly known as Safety Management Information Statistics [SAMIS]).
Sources
The Transit Safety and Security Statistics and Analysis Annual Report, formerly SAMIS, is a compilation and analysis of transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) NTD Reporting System by transit systems that are beneficiaries of FTA Urbanized Area Formula funds (Section 5307 grantees). Starting in 2002, commuter rail safety data are collected from the FRA Rail Accident Reporting System (RAIRS) in order to avoid redundant reporting to NTD. Transit fatalities: Transit Safety and Security Statistics and Analysis Annual Report. Transit passenger-miles: Transit Safety and Security Statistics and Analysis Annual Report.
Statistical Issues
The fatality counts in FTA's Transit Safety and Security Statistics and Analysis are a census. The major source of uncertainty in the measure relates to passenger-miles traveled. Passenger-miles are an estimate derived from reported passenger trips and average trip length. Passenger-miles are the cumulative sum of the distances ridden on passenger trips.

Transit authorities have accurate counts of unlinked passenger trips and fares. An unlinked trip is recorded each time a passenger boards a transit vehicle, even though the rider may be on the same journey. Transit authorities do not routinely record trip length. To calculate passenger-miles, total unlinked trips are multiplied by average trip length. To obtain an average trip length for their bus routes, transit authorities use Automatic Passenger Counters (APCs) with GPS technology or an FTA-approved sampling technique. To obtain passenger-mile data on rail systems, ferry boats, and paratransit, transit authorities often use Smart Card or other computerized tracking systems. Passenger-miles are the only data element that is sampled in the NTD. Validation based on annual trend analysis is performed on the passenger-mile inputs from the transit industry. The validation is performed by statistical analysts at the NTD contractor (Technology Solution Providers/General Dynamics Corporation).
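
In other words, the passenger-mile estimate reduces to a simple product, and the fatality rate then follows the same per-100-million scaling used for the highway measure. The sketch below is illustrative only; the function names and figures are hypothetical.

```python
def passenger_miles(unlinked_trips: int, avg_trip_length_miles: float) -> float:
    """Estimate passenger-miles as unlinked passenger trips times average trip length."""
    return unlinked_trips * avg_trip_length_miles

def fatality_rate_per_100m_pm(fatalities: int, pm: float) -> float:
    """Transit fatalities per 100 million passenger-miles traveled."""
    return fatalities / (pm / 100_000_000)

# Hypothetical agency: 250 million unlinked trips, a 3.8-mile average trip,
# and 4 reportable fatalities.
pm = passenger_miles(250_000_000, 3.8)             # 950 million passenger-miles
print(round(fatality_rate_per_100m_pm(4, pm), 2))  # about 0.42
```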
Completeness
The information for this measure comes from the FTA's Transit Safety and Security Statistics and Analysis program, formerly FTA's Safety Management Information System (SAMIS), which uses data reported by transit operators to the NTD. Many categories and definitions were added or changed in the new NTD in 2002, and have allowed for improvements and more timely analysis of trends and contributing factors. The 2007 measure is an extrapolation of partial-year data, particularly of passenger-miles traveled.
Reliability
An independent auditor and the transit agency's CEO certify that data reported to the NTD are accurate. Using data from the NTD to compile the Transit Safety & Security Statistics & Analysis program (formerly SAMIS) data, the USDOT Volpe National Transportation Systems Center compares current safety statistics with previous years, identifies any questionable trends, and seeks explanation from operators.

Details on DOT Safety Measures
Natural Gas and Hazardous Liquid Pipeline Incidents


Measure
Number of natural gas pipeline incidents and hazardous liquid pipeline accidents. (CY)
Scope
Gas pipeline incidents are reportable under 49 CFR 191.15, and hazardous liquid pipeline accidents involving a release of hazardous liquid or carbon dioxide are reportable under 49 CFR 195.50, when they meet the reporting criteria specified in those regulations. Gas incidents include both gas transmission and gas distribution pipeline systems. Data are adjusted/normalized for time series comparisons to account for changes in reporting criteria over time. This includes screening out hazardous liquid spills of less than 50 barrels (or five barrels for highly volatile liquids) unless the accident meets one of the other reporting criteria.
Sources
DOT/Pipeline and Hazardous Materials Safety Administration (PHMSA) Incident Data - derived from Pipeline Operator reports submitted on PHMSA Form F-7100.1 and F-7000.1.
Statistical Issues
A response percentage cannot be calculated as the actual population of reportable incidents cannot be precisely determined. Results in any single year need to be interpreted with some caution. Targets could be missed or met as a result of normal annual variation in the number of reported incidents.
Completeness
Compliance in reporting is very high, and most incidents that meet reporting requirements are submitted. Operators must submit reports within 30 days of an incident or face penalties for non-compliance. The reported estimates are based upon incident data reported in January through June 2007. There may be a 60-day lag in reporting and compiling information in the database for analysis. Traditionally, there are more incidents in the summer than in the winter. Preliminary estimates are based on data available as of the middle of August, with six months of data through the end of June. The CY 2007 estimate is a projection using both a seasonal adjustment (using a 10-year baseline) and a separate adjustment to account for the historical filing of late reports (92.5 percent of reports for January through June were filed by this time last year).
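
The projection described above can be read as two successive adjustments: scale up the partial-year count for reports not yet filed, then scale the first-half total to a full year using the historical seasonal split. The sketch below reflects that reading and is illustrative only; apart from the 92.5 percent late-filing figure quoted above, the factor values and function name are placeholders.

```python
def project_full_year_incidents(jan_jun_reported,
                                late_filing_rate=0.925,
                                jan_jun_share_of_year=0.48):
    """Project calendar-year pipeline incidents from January-June reports.

    late_filing_rate: share of Jan-Jun reports typically filed by mid-August
    (the 92.5 percent figure cited above).
    jan_jun_share_of_year: historical share of annual incidents occurring in
    January-June, from a 10-year baseline (0.48 is a placeholder value).
    """
    adjusted_first_half = jan_jun_reported / late_filing_rate
    return adjusted_first_half / jan_jun_share_of_year

# Illustrative only: 120 incidents reported for January-June projects
# to roughly 270 incidents for the full calendar year.
print(round(project_full_year_incidents(120)))
```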
Reliability
PHMSA routinely cross-checks incident/accident reports against other sources of data, such as the telephonic reporting system for incidents requiring immediate notification provided to the National Response Center (NRC). PHMSA is developing a Best Management Practice to ensure quality of the incident data. Data are not normalized to account for inflation. A fixed reporting threshold ($50,000) for property damage results in an increasing level of reporting over time. This threshold was set for gas pipeline incidents in 1985 and for hazardous liquid accidents in 1994.

Data are not normalized to account for the subjective judgment of the operator in filing reports for incidents that do not meet any of the quantitative reporting criteria. This may result in variations over time due to changes in industry reporting practices. The performance measure is not normalized for changes in exposure, that is, external factors, such as changes in pipeline mileage, that could affect the number of incidents without affecting the risk per mile of pipeline.

PHMSA uses these data in prioritizing its inspections and safety reviews, and for more long-term strategic management of its pipeline safety program.

Details on DOT Safety Measures
Serious Hazardous Materials Incidents


Measure
Number of serious hazardous materials transportation incidents. (CY)
Scope
Hazardous materials transportation incidents are reportable under 49 CFR Parts 100-185. Serious hazardous materials incidents are those that meet the severity criteria specified in those regulations. This measure tracks only transportation-related releases of hazardous materials that are in commerce. It includes incidents in all modes of transportation (air, truck, rail, and water) except pipelines.
Sources
Hazardous Material Information System (HMIS) maintained by DOT/Pipeline and Hazardous Materials Safety Administration-derived from reports submitted on Form DOT F 5800.1.
Statistical Issues
A response percentage cannot be calculated as the actual population of reportable incidents cannot be precisely determined. Results in any single year need to be interpreted with some caution. Targets could be missed or met as a result of normal variation in the number of reported incidents.
Completeness
Each person in physical possession of a hazardous material at the time an incident occurs during transportation (including loading, unloading, and temporary storage) must submit a Hazardous Materials Incident Report on DOT Form F 5800.1 (01-2004) within 30 days of discovery of the incident. Incident reports are received continuously by PHMSA.

Carriers are required to submit incident reports to PHMSA within 30 days of an incident. Once received by PHMSA, it takes approximately one month for incident reports to be processed and verified. The data are then made available in the HMIS database during the next monthly update.

PHMSA continues to receive reports from calendar year 2007. By the end of September 2007, actual incident data had been received through August 31, 2007. PHMSA is projecting the remainder of the calendar year using the actual number of incidents that occurred during September, October, November, and December of 2006, the previous calendar year. This methodology for projecting the CY 2007 estimate is expected to be within 2-4 percent of the final figure, which becomes available during the second quarter of CY 2008.
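
In other words, the CY 2007 estimate splices year-to-date actual counts with the prior year's actual counts for the remaining months. The sketch below is illustrative only; the function name and counts are placeholders.

```python
def project_cy_2007_incidents(actual_jan_aug_2007, actual_sep_dec_2006):
    """Project CY 2007 serious incidents: year-to-date actual counts plus the
    prior year's actual counts for the months not yet reported."""
    return actual_jan_aug_2007 + actual_sep_dec_2006

# Placeholder counts: 310 incidents through August 2007, plus 165 incidents
# during September-December 2006, gives a CY 2007 projection of 475.
print(project_cy_2007_incidents(310, 165))
```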
Reliability
PHMSA routinely cross-checks incident data against other sources of data, including the use of a news clipping service to provide information on significant hazmat incidents that might not be reported. The performance measure is not normalized for changes in exposure, that is, external factors, such as changes in the amount of hazmat shipped, that could affect the number of incidents without affecting the risk per ton shipped.

Annual hazmat incident data are used to track program performance, plan regulatory and outreach initiatives, and provide a statistical basis for research and analysis. The data are also used on a daily basis to target entities for enforcement efforts and to review applications for exemption renewals.

Details on DOT Mobility Measures
Highway Infrastructure Condition


Measure
Percent of travel on the National Highway System (NHS) meeting pavement performance standards for a good-rated ride. (CY)
Scope
Data include vehicle-miles traveled on the Highway Performance Monitoring System (HPMS) reported NHS sections and pavement ride quality data reported using the International Roughness Index (IRI). IRI is a quantitative measure of the accumulated response of a quarter-car vehicle suspension experienced while traveling over a pavement. An IRI of 95 inches per mile or less is necessary for a good-rated ride. Vehicle-Miles of Travel (VMT) represents the total number of vehicle-miles traveled by motor vehicles on public roadways within the 50 States, Washington, D.C., and Puerto Rico.
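
The measure itself is a VMT-weighted share: travel on NHS sections with an IRI of 95 inches per mile or less, divided by total NHS travel. The sketch below is illustrative only; the function name and section records are hypothetical.

```python
def pct_travel_good_ride(sections, iri_threshold=95.0):
    """Percent of NHS travel on pavement with IRI at or below the threshold.

    sections: list of (annual VMT on the section, measured IRI in inches per mile).
    """
    total_vmt = sum(vmt for vmt, _ in sections)
    good_vmt = sum(vmt for vmt, iri in sections if iri <= iri_threshold)
    return 100.0 * good_vmt / total_vmt

# Hypothetical NHS sections as (VMT in millions, IRI in inches per mile).
print(round(pct_travel_good_ride([(120, 80), (60, 110), (220, 95)]), 1))  # 85.0
```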
Sources
Data for this measure are collected by the State Highway Agencies using calibrated measurement devices that meet industry set standards and reported to FHWA. Measurement procedures are included in the FHWA HPMS Field Manual. The VMT data are derived from the HPMS.
Statistical Issues
The major source of error in the percentages is the differences in data collection methodologies between the States and the differences in data collection intervals. FHWA is working on revisions to the HPMS data collection guidelines to minimize these potential errors. VMT data are also subject to sampling errors. The magnitude of error depends on how well the sites of the continuous counting stations represent nationwide traffic rates. HPMS is also subject to estimation differences between the States, even though FHWA works to minimize such differences, and to differing projections of growth, population, and economic conditions that affect driving behavior.
Completeness
The 2007 actual results for this measure are reported based on 2006 data, which may be incomplete as late as October 2007. Prior to 2006, actual results were reported in the prior year and a projection for the current year was made based on the prior year data.
Reliability
The HPMS data are collected by the 50 States, the District of Columbia, and Puerto Rico in cooperation with local governments. While many of the geometric data items, such as type of median, rarely change, other items, such as traffic volume, change yearly. Typically, the States maintain data inventories that are the repositories of a wide variety of data. The HPMS data items are simply extracted from these inventories, although some data are collected just to meet Agency requirements.

The FHWA provides guidelines for data collection in the HPMS Field Manual. Adherence to these guidelines varies by State, depending on issues such as staff, resources, internal policies, and uses of the data at the data provider level. An annual review of reported data is conducted by the FHWA, both at the headquarters level and in the Division Offices in each State. The reported data are subjected to intense editing and comparison with previously reported data and reasonability checks. A written annual evaluation is provided to each State to document potential problems and to encourage corrective actions. Data re-submittal is requested in cases where major problems are identified.

Details on DOT Mobility Measures
Highway Congestion


Measure
Percent of total annual urban-area travel occurring in congested conditions. (CY)
Scope
Data are derived from approximately 400 urban areas. The data reflects travel conditions on freeway and principal arterial street networks.

Definitions:
Urban area - Developed area with a density of greater than 1,000 persons per square mile.
Congested travel - Traveling below the posted speed limit(s).
Sources
Data collected and provided by the State Departments of Transportation from existing State or local government databases, including those of Metropolitan Planning Organizations. FHWA's Highway Performance Monitoring System (HPMS) serves as the repository of the data. The Texas Transportation Institute utilizes HPMS data to derive the above measures.
Statistical Issues
The methodology used to calculate performance measures has been developed by the Texas Transportation Institute (TTI) and reported in its annual Mobility Study. A detailed description of TTI's methodology is available at: http://mobility.tamu.edu/ums/report/methodology.stm.

With sponsorship from the National Cooperative Highway Research Program of the Transportation Research Board, the methodology was significantly revised in 2006 and 2007 to take advantage of new studies and detailed data sources that have not been available in previous studies.
Completeness
The 2005 and prior measures are final. The 2006 measure is preliminary, as partial 2006 HPMS data were used to construct the estimates. HPMS data are compiled from the States and verified approximately 10 months after the base year (e.g., 2007 actual numbers will not be available from HPMS until October 2008). The 2007 measure is a projection based on recent year trends.
Reliability
The HPMS data are collected by the 50 States, the District of Columbia, and Puerto Rico in cooperation with local governments. While many of the geometric data items, such as type of median, rarely change, other items, such as traffic volume, change yearly. Typically, the States maintain data inventories that are the repositories of a wide variety of data. The HPMS data items are simply extracted from these inventories, although some data are collected just to meet Agency requirements. The FHWA provides guidelines for data collection in the HPMS Field Manual. Adherence to these guidelines varies by State, depending on issues such as staff, resources, internal policies, and uses of the data at the data provider level.

An annual review of reported data is conducted by the FHWA, both at the headquarters level and in the Division Offices in each State. The reported data are subjected to intense editing and comparison with previously-reported data and reasonability checks. A written annual evaluation is provided to each State to document potential problems and to encourage corrective actions. Data re-submittal is requested in cases where major problems are identified.

Details on DOT Mobility Measures
Transit Ridership


Measure
Average percent change in transit boardings per transit market (150 largest transit agencies). (CY)
Scope
The metric is the average percent change in transit boardings. The component is transit passenger boardings within a transit market. The modes covered are: Motor Bus (MB), Heavy Rail (HR), Light Rail (LR), Commuter Rail (CR), Demand Response (DR), Vanpool (VP), and Automated Guideway (AG).
Sources
Transit Passengers: Data derived from counts made on bus and rail routes by transit agencies that are beneficiaries of FTA Urbanized Area Formula funds, as part of their monthly National Transit Database (NTD) Reporting System submissions. Data are collected from the 150 largest transit systems.
Statistical Issues
The sources of uncertainty include coverage errors and auditing issues. These data are validated by contractor staff for the FTA Office of Budget and Policy.

By statute, every FTA formula grant recipient in an urbanized area (defined by the Census Bureau as having a population of 50,000 or more) must report to the National Transit Database (NTD). In cities of this size, virtually every transit authority receives FTA funding, and there are only a few cities with over 50,000 persons that do not provide public transit service. Publicly-funded transit service can be directly-operated or purchased transportation.

Transit authorities have accurate counts of unlinked passenger trips and fares. An unlinked trip is recorded each time a passenger boards a transit vehicle, even though the rider may be on the same journey. As a check, trips are routinely reconciled against fare revenues. Until 2002, reports were required only on an annual basis.
Completeness
DOT has revised this measure to better account for the impact of ridership by counting actual monthly boardings.
Reliability
For 2007, the indicator compares transit ridership for the urbanized areas containing the 150 largest transit agencies, aggregated by mode, with the year ending June 30, 2007. An independent auditor and the transit agency's CEO certify that annual data reported to the NTD are accurate. FTA also compares data to key indicators such as vehicle revenue-miles, number of buses in service during peak periods, etc.

FTA has undertaken a major initiative to increase ridership nationwide with the planned results being a reduction in congestion. This measure is built into all FTA senior executive performance standards.

Details on DOT Mobility Measures
Transportation Accessibility


Measures
  1. Percentage of bus fleets compliant with the Americans with Disabilities Act (ADA). (CY)
  2. Percent of key rail stations compliant with the Americans with Disabilities Act (ADA). (CY)
Scope
Accessibility for bus fleet means that vehicles are equipped with wheelchair lifts or ramps.

Transit buses are buses used in urbanized areas to provide public transit service to the general public. Transit buses do not include private intercity buses (e.g., Greyhound), private shuttle buses, charter buses, or school buses.

The percentage of bus fleets that are equipped with lifts or ramps is only a partial measure of overall accessibility under the ADA as it measures only the availability of transit buses in our National fleet that can accommodate wheelchairs through the use of mechanical lifts or ramps. Accessibility for transit vehicles under the ADA includes other equipment and operational practices that are not reflected in this indicator.

Accessibility for key rail facilities is determined by standards for ADA compliance. Transit systems were required to identify key stations. A key station is one designated as such by public entities that operate existing commuter, light, or rapid rail systems. Each public entity has determined which stations on its system have been designated as key stations through its planning and public participation process using criteria established by DOT regulations.

All new rail stations are required to be ADA compliant upon completion and must meet standards for new rail stations, not key stations. All altered stations are required to be ADA compliant upon completion and must meet standards for alterations of transportation facilities by public entities.
Sources
Compliant bus fleets: National Transit Database (NTD).

Compliant rail stations: Rail Station status reports to the FTA.
Statistical Issues
Data are obtained from a census of publicly-funded transit buses in urbanized areas. Information on the ADA key rail stations is reported to FTA by transit authorities. These data are not based on a sample.
Completeness
At a transit authority, vehicle purchases are significant capital expenditures. Vehicles purchased with FTA funds must have a useful life of 12 years. Whether a bus is purchased or leased, the equipment on the bus, including lifts and ramps, is recorded. For the last 20 years, transit agencies have reported on the equipment in their bus fleets to the FTA in their annual NTD submissions. This is a census of publicly-funded transit buses in urbanized areas, not a sample. Urbanized areas have more than 50,000 persons and are defined by the Census Bureau. By statute, every FTA formula grant recipient in an urbanized area must report to the NTD. In cities of this size, virtually every transit authority receives FTA funding. There are only a few cities of over 50,000 persons that do not provide public transit service. Publicly funded transit service can be directly operated or purchased transportation.

Data reported for key station accessibility have historically excluded those stations for which time extensions had been granted under 49 CFR 37.47(c)(2) or 37.51(c)(2). There are a total of 138 such stations for which time extensions of various lengths were granted, some of them through 2020, the maximum permitted. These deadlines are now beginning to pass, and these stations can no longer be excluded from the total key station accessibility figures; the total number of time extensions from 2007 through 2020 stands at 19. The total number of key stations will therefore increase, and the percentage of compliant stations may decrease as they are added to the total key station count. Beginning in 2007, the key station accessibility figures began reporting the total number of key stations, the total number that are accessible, and the number with outstanding time extensions.
Reliability
All data in the NTD are self-reported by the transit industry. The transit agency's Chief Executive Officer and an independent auditor for the transit agency certify the accuracy of this self-reported data. The data are also compared with fleet data reported in previous years and crosschecked with other related operating and financial data in the report. Fleet inventory is also reviewed as part of FTA's Triennial Review, and a visual inspection is made at that time.

Information on ADA key rail stations is reported to FTA by transit authorities. The FTA's Office of Civil Rights conducts oversight assessments to verify the information on key rail station accessibility. Quarterly rail station status reports and key rail station assessments have significantly increased the number of key rail stations that have come into compliance over the last several years.

FTA will primarily influence the goal through Federal transit infrastructure investment (which speeds the rate at which transit operators can transition to ADA-compliant facilities and equipment), oversight, and technical assistance.

Details on DOT Mobility Measures
Access to Jobs


Measure
Number of employment sites (in thousands) that are made accessible by Job Access and Reverse Commute (JARC) transportation services. (FY)
Scope
This measure assesses one part of the JARC program: the number of employment sites made accessible that were not previously accessible. The new employment sites represent new sites connected geographically by the new service or new employment sites reached during time periods not previously covered (late night and weekend service).

An employment site is a new stop reaching employers not previously reached, either directly by demand-responsive services or within ¼ mile of the new service stop for fixed-route service. Services that make an employment site accessible may include, but are not limited to, carpools, vanpools, and other demand-responsive services as well as traditional bus and rail public transit. This measure does not account for those JARC activities that encourage riders to use already existing sources of public transit.
Sources
FTA Grantees.
Statistical Issues
In previous years, FTA has had difficulty in getting complete information from its grantees. Changes resulting from an FTA analysis of this issue have improved grantee reporting compliance to 90 percent of those JARC grantees expected to report.
Completeness
JARC grantees are requested to report the new employment sites reached by the transportation services initiated under their grant. Approximately 90 percent of the JARC grantees have reported this data for FY 2006 and similar or better results are expected for FY 2007. FTA projects these results to estimate the total new employment sites reached by all grantees.

The calculation methodology is based on the expenditures of selected grantees when compared to the total expenditures of all grantees during the same two-fiscal-year period. In subsequent years, FTA further proposes to supplement this approach by simplifying the data-reporting process, developing profiles of all grantees, and conducting on-site surveys to collect qualitative information about program performance from selected grantees.

The preliminary methodology for projecting the number of employment sites reached in FY 2007 has two elements. Phase 1 uses existing data collected for FY 2006 to project employment sites reached, based on the expenditure level for FY 2007. Phase 2 involves projections based on actual FY 2006 and FY 2007 cumulative data that will be available in early 2008, drawing on the 2006 data collected from grantees. If the data collected are incomplete, projections will be made for grantees not reporting, based on data collected in FY 2006/FY 2007.
Reliability
Oversight contractors review the data and contact grantees to ascertain methodologies on a sample basis, or when the information warrants review.

Details on DOT Mobility Measures
Aviation Delay


Measure
Percent of all flights arriving within 15 minutes of schedule at the 35 Operational Evolution Plan (OEP) airports due to National Airspace System (NAS) related delays. (FY)
Scope
NAS On-Time Arrival is the percentage of all flights arriving at the 35 OEP airports equal to or less than 15 minutes late, based on the carrier flight plan filed with the FAA, and excluding minutes of delay attributed by air carriers to extreme weather (events such as hurricanes and earthquakes), carrier action, security delay, and prorated minutes for late-arriving flights at the departure airport. The number of flights arriving within 15 minutes of the flight plan arrival time is divided by the total number of completed flights.

A flight is considered on-time if it arrives no later than 15 minutes after its published, scheduled arrival time. This definition is used in both the DOT Airline Service Quality Performance (ASQP) and Aviation System Performance Metrics (ASPM) reporting systems. Air carriers, however, also file up-to-date flight plans for their services with the FAA that may differ from their published flight schedules. This metric measures on-time performance against the carrier's filed flight plan, rather than against what may be a dated published schedule.

The time of arrival of completed passenger flights to and from the 35 OEP airports is compared to their flight plan scheduled time of arrival. For delayed flights, delay minutes attributable to extreme weather, carrier caused delay, security delay, and a prorated share of delay minutes due to a late arriving flight at the departure airport are subtracted from the total minutes of delay. If the flight is still delayed, that delay is attributed to the NAS and the FAA, and counted as a delayed flight.
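
One simplified reading of the attribution rule above works flight by flight: start from total arrival delay against the filed flight plan, subtract the non-NAS delay causes, and count the flight as NAS-delayed only if more than 15 minutes of delay remain. The sketch below reflects that reading and is illustrative only; the function name and field names are hypothetical, not ASPM data elements.

```python
def nas_on_time_percent(flights):
    """Percent of completed flights with 15 minutes or less of NAS-attributed delay.

    Each flight dict uses hypothetical keys: total_delay_min (vs. the filed flight
    plan), weather_min, carrier_min, security_min, late_aircraft_min (prorated).
    """
    on_time = 0
    for f in flights:
        non_nas = (f["weather_min"] + f["carrier_min"] +
                   f["security_min"] + f["late_aircraft_min"])
        nas_delay = max(0, f["total_delay_min"] - non_nas)
        if nas_delay <= 15:
            on_time += 1
    return 100.0 * on_time / len(flights)

# Two hypothetical flights: one delayed 40 minutes, 30 of which are carrier-caused;
# one delayed 25 minutes entirely attributed to the NAS.
flights = [
    {"total_delay_min": 40, "weather_min": 0, "carrier_min": 30,
     "security_min": 0, "late_aircraft_min": 0},
    {"total_delay_min": 25, "weather_min": 0, "carrier_min": 0,
     "security_min": 0, "late_aircraft_min": 0},
]
print(nas_on_time_percent(flights))  # 50.0
```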
Sources
The ASPM database, maintained by the FAA's Office of Aviation Policy and Plans, supplemented by DOT's ASQP causation database, provides the data for this measure. By agreement with the FAA, ASPM flight data are filed by certain major air carriers for all flights to and from most large and medium hubs, and are supplemented by flight records contained in the Enhanced Traffic Management System (ETMS) and flight movement times provided by Aeronautical Radio, Inc. (ARINC). Data are sufficient to complete ASPM data files for 75 airports. The 35 OEP airports are a subset of these 75 airports.
Statistical Issues
ASQP data are not reported for all carriers; only 19 carriers report monthly into the ASQP reporting system.
Completeness
Fiscal year data are finalized approximately 90 days after the close of the fiscal year.
Reliability
The reliability of ASPM is verified on a daily basis by the execution of a number of audit checks, comparison to other published data metrics, and through the use of ASPM by over 1,500 registered users. ASQP data are filed monthly with DOT under 14 CFR 234, Airline Service Quality Performance Reports, which separately requires reporting by major air carriers on flights to and from all large hubs.

Details on DOT Global Connectivity Measures
Disadvantaged and Women-Owned Small Businesses


Measures
  1. Percent share of the total dollar value of DOT direct contracts that are awarded to women-owned businesses. (FY)
  2. Percent share of the total dollar value of DOT direct contracts that are awarded to small disadvantaged businesses. (FY)
Scope
Includes contracts awarded by DOT Operating Administrations through direct procurement. It does not include FAA contracts exempt from the Small Business Act.
Sources
Prior to October 1, 2003, these data were derived from the USDOT Contract Information System (CIS), which fed the old Federal Procurement Data System (FPDS). The CIS included all USDOT contracting activities that reported to the Federal Procurement Data Center (FPDC). Migration to the new Federal Procurement Data System on October 1, 2003 enabled the removal of agency FPDS feeder systems government-wide (including CIS).

New data reports will come directly from FPDS. Data are compiled by USDOT contracting staff from Department contract documents. Selected information is either transmitted from the Operating Administration contract writing systems or manually keyed via the FPDS web site into the FPDS database, which can be queried to compute needed statistics. All USDOT contracts are enumerated.
Statistical Issues
Until recently the reliability of the Federal Procurement Data System/Next Generation (FPDS/NG) was an issue with DOT and other federal agencies including the Government Accountability Office (GAO). The FPDS is designed to be an accurate and reliable system, as required by the Small Business Act, Section 644(g). However, it is recognized that at least through the transitional periods of FY 2003 through FY 2006, there may be issues of synchronization and data reliability between federal agencies and the FPDS/NG.

DOT currently is required to scrub FPDS/NG data and resubmit it for validation. After re-verifying these data against internal sources, there are no known major errors present in the data. Business types are as identified in the Central Contractor Registration (CCR) database. However, random variation in the number of DOT contracts as well as the number of women-owned and small disadvantaged businesses each year results in some random variation in these measures from year to year.
Completeness
The Federal Procurement Data System (FPDS) is prescribed by regulations as the official data collection mechanism for DOT acquisitions.
Reliability
There is extensive regulatory coverage to ensure data reliability. The system is used to prepare many reports to Congress, the Small Business Administration, and others. Actual performance goal data, as finalized by the Small Business Administration, are the only reliable basis for the program evaluations mandated by the Small Business Act, Section 644(g).

Details on DOT Global Connectivity Measures
St. Lawrence Seaway System Availability


Measure
Percent of days in the shipping season that the U.S. portion of the St. Lawrence Seaway is available. (FY)
Scope
The availability and reliability of the U.S. sectors of the St. Lawrence Seaway, including the two U.S. Seaway locks in Massena, N.Y., are critical to continuous commercial shipping during the navigation season (late March to late December). System downtime due to any condition (weather, vessel incidents, malfunctioning equipment) causes delays to shipping, affecting international trade to and from the Great Lakes region of North America. Downtime is measured in hours/minutes of delay for weather (visibility, fog, snow, ice); vessel incidents (human error, electrical and/or mechanical failure); water level and rate of flow regulation; and lock equipment malfunction.
Sources
Saint Lawrence Seaway Development Corporation (SLSDC) Office of Lock Operations and Marine Services.
Statistical Issues
None.
Completeness
As the agency responsible for the operation and maintenance of the U.S. portion of the St. Lawrence Seaway, SLSDC's lock operations unit gathers primary data for all vessel transits through the U.S. Seaway sectors and locks, including any downtime in operations. Data are collected on site, at the U.S. locks, as vessels are transiting or as operations are suspended. This information measuring the System's reliability is compiled and delivered to SLSDC senior staff and stakeholders each month. In addition, SLSDC compiles annual System availability data for comparison purposes. Since SLSDC gathers data directly from observation, there are no limitations. Historically, the SLSDC has reported this performance metric for its entire navigation season (late March/early April to late December). Unfortunately, due to reporting timelines, System availability data are reported only through September in this report.
Reliability
SLSDC verifies and validates the accuracy of the data through review of 24-hour vessel traffic control computer records, radio communication between the two Seaway entities and vessel operators, and video and audiotapes of vessel incidents.

Details on DOT Global Connectivity Measures
Bilateral Agreements


Measure
Number of new or expanded bilateral aviation safety agreements implemented. (FY)

The Bilateral Aviation Safety Agreement (BASA) is made up of two parts: (1) an executive agreement signed by the Department of State and Ministry of Foreign Affairs, and (2) one or more implementation procedures signed by the FAA and the other civil aviation authority. The measure is the number of agreements signed with foreign governments.
Scope
Bilateral Agreements related to aviation safety have two components: executive agreements and implementation procedures. The Executive Agreement is signed by the Department of State and the target country's Ministry of Foreign Affairs. It lays the essential groundwork for cooperation between the two governments and their respective aviation authorities. Once the Executive Agreement is executed, negotiations for the second component, the implementation procedures, can proceed. Implementation procedures provide detailed operational safety and certification arrangements between the FAA and the target country's civil aviation authority. The implementation procedure is the operational portion of the bilateral agreement that allows for the reciprocal acceptance of aviation goods and services between the two countries. The target is achieved when either a new Executive Agreement is signed or a new or expanded implementation procedure is concluded with the target country or aviation authority.
Sources
The executive agreements are negotiated and maintained by the Department of State. The implementation procedures are negotiated and concluded by FAA. The official signed document is maintained at the FAA.
Statistical Issues
None.
Completeness
There are no data completeness issues associated with this measure since it is a simple count of final signed executive agreements and implementation procedures. This performance target is monitored monthly by tracking interim negotiation steps leading to completion of a BASA and tracking FAA internal coordination of the negotiated draft text.

The final signing of executive agreements is generally out of the control of the FAA. Many sovereign nations view these agreements as treaties that require legislative approval. The FAA and U.S. Government cannot control the timing of legislatures in other countries. Therefore, the FAA will count executive agreements only when signed. The negotiation of implementation procedures is more within FAA's control.

The signed document of the executive agreement constitutes evidence of completion. For implementation procedures, evidence will be either a signed procedure or some form of agreement between both parties that material negotiations are concluded, but a formal signing ceremony is pending. This can take the form of a signed agreement stating that fact, e-mail, meeting minutes, or other mutual documentation.
Reliability
No issues.

Details on DOT Global Connectivity Measures
Reduced Barriers to Trade in Transportation


Measure
Number of potential air transportation consumers (in billions) in international markets traveling between the U.S. and countries with open skies and open transborder aviation agreements (measure revised in FY 2005).
Scope
The number of potential air transportation consumers is the total population of the U.S. and countries with open skies aviation agreements with the U.S. By the end of FY 2007, there were more than 80 open skies agreements. This measurement includes the annual increase in population for the countries where open skies have been achieved, as well as the additional populations for newly negotiated open skies agreements. The estimate for the additional population is based on the median population size of the countries without open skies agreements. The measurement thus reflects the extent to which the liberalization resulting from open skies agreements, negotiated by DOT, increases travel opportunities between the U.S. and countries with previously restricted aviation agreements.
Sources
Estimates of the population of the U.S. and countries with open skies agreements with the U.S.: Midyear Population, International Data Base, U.S. Bureau of the Census (per website).
Statistical Issues
The International Data Base of the U.S. Bureau of the Census is a reliable source of population estimates. The Bureau's website and publications provide qualifying data notes that more fully describe technical and other issues. These qualifying notes do not significantly affect our analyses.
Completeness
The International Data Base of the U.S. Bureau of the Census is a reliable source of population estimates. The Bureau's website and publications provide qualifying data notes that more fully describe technical and other issues. These qualifying notes do not significantly affect our analyses.
Reliability
The International Data Base of the U.S. Bureau of the Census is a reliable source of population estimates. The Bureau's website and publications provide qualifying data notes that more fully describe technical and other issues. These qualifying notes do not significantly affect our analyses.

Details on DOT Global Connectivity Measures
Enhanced International Competitiveness of U.S. Transportation Providers


Measure
Number of international negotiations conducted annually to remove market-distorting barriers to trade in air transportation.
Scope
The number of international negotiations conducted annually to remove market-distorting barriers to trade in transportation is the number (or rounds) of meetings and negotiations that are conducted in an effort to reach open skies agreements or other liberalized aviation agreements, or to resolve problems. By the end of FY 2007, there were more than 80 open skies agreements and 19 liberalized (but not open skies) agreements. These numbers, however, understate the number of negotiating sessions that have historically been held to complete these agreements. The measurement thus reflects an estimate of the extent of and manner by which the DOT might best apply the necessary resources to open the competitive environment and provide increased travel opportunities and economic benefits.
Sources
Estimate of the number of annual negotiating sessions that are required to achieve further international aviation liberalization. It is an internal estimate generated by the Office of the Assistant Secretary for Aviation and International Affairs based on a number of analytical, economic and geopolitical factors.
Statistical
Issues
Due to geopolitical factors, the nature of international aviation negotiations can follow an unpredictable course. It is impossible to gauge or comment upon the data limitations, statistical issues, data completeness and data reliability.
Completeness
Due to geopolitical factors, the nature of international aviation negotiations can follow an unpredictable course. It is impossible to gauge or comment upon the data limitations, statistical issues, data completeness and data reliability.
Reliability
Due to geopolitical factors, the nature of international aviation negotiations can follow an unpredictable course. It is impossible to gauge or comment upon the data limitations, statistical issues, data completeness and data reliability.

Details on DOT Global Connectivity Measures
Travel in Freight Significant Corridors


Measure
Number of freight corridors with an annual decrease in the average buffer index rating. (CY)
Scope
Travel time reliability is a key indicator of transportation system performance. The FHWA uses measured speed data to calculate a Buffer Index (BI) for each freight significant corridor. The BI is a measure of travel time reliability and variability that represents the extra time (or time cushion) that would have to be added to the average travel time to ensure on-time arrival 95 percent of the time.
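As a rough illustration of the calculation described above, the sketch below computes a Buffer Index from a set of observed corridor travel times. The variable names and the simple percentile method are assumptions for illustration only and do not represent FHWA's production algorithm.

    # Illustrative only: compute a Buffer Index (BI) for one corridor from a
    # list of observed travel times (minutes). A simple percentile method is
    # assumed; FHWA's production calculation may differ.
    import statistics

    def buffer_index(travel_times_minutes):
        average = statistics.mean(travel_times_minutes)
        # 95th percentile travel time: the trip length a shipper should plan
        # for in order to arrive on time 95 percent of the time.
        ordered = sorted(travel_times_minutes)
        rank = max(0, round(0.95 * len(ordered)) - 1)
        t95 = ordered[rank]
        # BI expresses the extra "time cushion" as a share of the average trip.
        return (t95 - average) / average

    # Example: an average trip of about 64 minutes with occasional heavy delay.
    times = [55, 58, 60, 61, 62, 59, 57, 63, 75, 90]
    print(f"Buffer Index: {buffer_index(times):.2f}")  # 0.41, i.e., add 41 percent to the average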
Sources
Travel time data for freight significant corridors are derived using time and location data from satellite communications equipment on board mobile commercial vehicles. A Global Positioning System (GPS) device in the vehicle transmits a continuous or periodic signal to an orbiting satellite. This technology allows commercial vehicles to serve as probes and enables direct measurement of commercial vehicle average operating speeds, travel rates, and travel times. Selection of freight significant corridors and highway segments is largely based on the volume of freight moved on the segment.
Statistical
Issues
The key issues are long term viability of data source, sampling size of the commercial vehicle probes, and frequency of the time and position sampling.
Completeness
FHWA is partnering with a vendor that collects automatic vehicle location probe information from a customer base, primarily interstate long-haul carriers. The data provide nationwide coverage from approximately 250,000 vehicles in the United States plus additional vehicles in Canada. Long-haul carrier fleet managers arrange with the vendor to equip their vehicles with GPS probes and to have signals sent to the vehicles and readings taken as often as every 15 minutes. The interval between probe readings depends on the subscription and services contracted for by each individual carrier and may range from every 15 minutes to every two hours. The data transmitted are: truck ID, latitude, longitude, date and time, and interstate route. FHWA processes and manages the data provided by the vendor to derive the information for this measure.
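The sketch below illustrates, under simplified assumptions, how periodic probe readings of the kind described (truck ID, latitude, longitude, timestamp, and route) could be turned into average operating speeds by corridor. The record layout and the distance helper are hypothetical and are not the vendor's or FHWA's actual schema or processing.

    # Illustrative only: derive average operating speed by route from periodic
    # GPS probe readings. Field names and the distance helper are assumptions;
    # the actual vendor feed and FHWA processing differ.
    import math
    from collections import defaultdict

    def haversine_miles(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in miles.
        r = 3958.8
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def average_speed_by_route(readings):
        # readings: dicts with truck_id, route, lat, lon, and hours (time in hours).
        # Successive readings from the same truck on the same route yield one speed sample.
        by_truck = defaultdict(list)
        for rec in sorted(readings, key=lambda r: (r["truck_id"], r["hours"])):
            by_truck[(rec["truck_id"], rec["route"])].append(rec)
        speeds = defaultdict(list)
        for pts in by_truck.values():
            for a, b in zip(pts, pts[1:]):
                elapsed = b["hours"] - a["hours"]
                if elapsed <= 0:
                    continue
                miles = haversine_miles(a["lat"], a["lon"], b["lat"], b["lon"])
                speeds[pts[0]["route"]].append(miles / elapsed)  # miles per hour
        return {route: sum(v) / len(v) for route, v in speeds.items() if v}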
Reliability
Probe vehicle performance systems are designed to provide travel time, speed and delay information without traditional fixed-location traffic monitoring and data collection systems. Probe-based systems enable coverage of much larger geographic areas (i.e., entire roadway networks) without the cost of building fixed-location traffic data collection systems throughout those networks. This technique takes advantage of the significant reductions in the cost of GPS devices that report current location and time information with a high degree of accuracy. When placed in vehicles and combined with electronic map information, GPS devices are the primary component of excellent vehicle location systems. Storage and analysis of the GPS location data allow for very accurate roadway performance measurement. To provide reliable roadway performance estimates, a large enough number of vehicles must be equipped with GPS to provide an unbiased measure of roadway performance, and to provide the temporal and geographic diversity desired by the performance measurement system. A significant drawback to probe vehicle-based performance monitoring is that it does not provide information about the level of roadway use (vehicle volume), but only provides information about the speeds and travel times being experienced.

Details on DOT Environmental Stewardship Measures
Exemplary Ecosystems (Environment)


Measure
Number of exemplary ecosystem initiatives. (FY)
Scope
An exemplary ecosystem initiative is an action or measure that will help sustain or restore natural systems and their functions and values, using an ecosystem or landscape context. The measure is a cumulative count of the number of exemplary ecosystem initiatives initiated. Ecosystem/habitat projects are identified as exemplary if they are unique or highly unusual in geographic scope; use cutting-edge science or technology; attain a high level of environmental standards; achieve high-quality results; and/or are recognized by environmental interests as being particularly valuable or noteworthy.
Sources
State DOTs and FHWA field offices submit lists of ecosystem and habitat conservation initiatives to FHWA for consideration.
Statistical
Issues
The data may not represent all ecosystem and habitat conservation initiatives underway. Submittals are made at the discretion of the States and FHWA field offices.
Completeness
All identified exemplary ecosystem initiatives are included. However, there may be other potential qualifying initiatives that have not been identified.
Reliability
The identification of exemplary ecosystem initiatives may not be consistent across all States and FHWA field offices. While the criteria are carefully defined and complete, they are still subject to interpretation.

Details on DOT Environmental Stewardship Measures
DOT Facility Cleanup


Measure
Percent of DOT facilities categorized as No Further Remedial Action Planned (NFRAP) under the Superfund Amendments and Reauthorization Act (SARA). (FY)
Scope
EPA maintains a Federal Facility Hazardous Waste docket which contains information regarding Federal facilities that manage hazardous wastes or from which hazardous substances have been or may be released. DOT facilities listed on the docket are discussed in the Annual SARA report sent to Congress each year. EPA regional offices make the determination to change facility status to NFRAPs on the docket.
Sources
EPA Federal Facility Hazardous Waste docket which is issued twice a year.
Statistical
Issues
None.
 
Completeness
The primary criterion for NFRAP is a determination that the facility does not pose a significant threat to the public health or environment. Responsibility for these facilities may be with FAA, FHWA, or FRA. NFRAP decisions may be reversed if future information reveals that additional remedial actions are warranted. The OAs' activities are controlled, to a degree, by interaction and decisions made by EPA Regional personnel. This measure is current and has no missing data.
Reliability
DOT uses this data to prioritize cleanup activities and attendant resource levels. However, there is insufficient time to complete remediation prior to the close of the FY for any sites added in the July report.

Details on DOT Environmental Stewardship Measures
Mobile Source Emissions


Measure
Twelve-month moving average number of area transportation emissions conformity lapses. (FY)
Scope
The transportation conformity process is intended to ensure that transportation plans, programs, and projects will not create new violations of the National Ambient Air Quality Standards (NAAQS), increase the frequency or severity of existing NAAQS violations, or delay the attainment of the NAAQS in designated non-attainment (or maintenance) areas.
Sources
The FHWA and FTA jointly make conformity determinations within air quality non-attainment and maintenance areas to ensure that Federal actions conform to the purpose of State Implementation Plans (SIP). With DOT concurrence, the EPA has issued regulations pertaining to the criteria and procedures for transportation conformity, which were revised based on stakeholder comment.
Statistical
Issues
None.
 
Completeness
If conformity cannot be determined within certain time frames after amending the SIP, or if three years have passed since the last conformity determination, a conformity lapse is deemed to exist and no new non-exempt projects may advance until a new determination for the plan and Transportation Improvement Program (TIP) can be made. This affects transit as well as highway projects.

During a conformity lapse, FHWA and FTA can only make approvals or grants for projects that are exempt from the conformity process (pursuant to Sections 93.126 and 93.127 of the conformity rule) such as a safety project and transportation control measures (TCM) that are included in an approved SIP. Only those project phases that have received approval of the project agreement, and transit projects that have received a full funding grant agreement, or equivalent approvals, prior to the conformity lapse may proceed. This measure is current and has no missing data.
Reliability
There are no reliability issues. FHWA and FTA jointly make conformity determinations within air quality non-attainment and maintenance areas to ensure that Federal actions conform to the purpose of the SIP.

Details on DOT Environmental Stewardship Measures
Hazardous Liquid Materials Spilled from Pipelines


Measure
Tons of hazardous liquid materials spilled per million ton-miles shipped by pipelines. (CY)
Scope
Liquid pipeline accidents (spills) are reportable under 49 CFR 195.50 if there is a release of hazardous liquid or carbon dioxide and any one of the following:

  1. unintentional explosion or fire;
  2. release of five gallons or more (except certain maintenance activities);
  3. death or injury requiring hospitalization; or,
  4. estimated property damage, including costs of cleanup and recovery, value of lost product, and other property damage exceeding $50,000.

Data are adjusted/normalized for time series comparisons to account for changes in reporting criteria over time. This includes screening out hazardous liquid spills of less than 50 barrels (or five barrels for highly-volatile liquids) unless the accident meets one of the other reporting criteria. Highly-volatile liquid (HVL) spills are not included in this performance measure. HVLs evaporate on release and don't impact the environment in the usual way that other liquid petroleum products do.
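A minimal sketch of the normalization screen described above is shown below, assuming a simple record layout (spill volume in barrels, an HVL flag, and flags for the other reporting criteria). The field names are illustrative, not PHMSA's reporting form.

    # Illustrative only: apply the time-series normalization screen described
    # above to one reported spill record. Field names are assumptions.
    def include_in_measure(spill):
        # HVL spills are excluded from this performance measure entirely.
        if spill["is_hvl"]:
            return False
        # Other reporting criteria from 49 CFR 195.50: fire/explosion, death or
        # injury requiring hospitalization, or property damage over $50,000.
        meets_other_criteria = (
            spill["fire_or_explosion"]
            or spill["death_or_injury"]
            or spill["property_damage_usd"] > 50_000
        )
        # Screen out spills under 50 barrels unless another criterion applies.
        return spill["barrels"] >= 50 or meets_other_criteria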
Sources
DOT/Pipeline and Hazardous Materials Safety Administration (PHMSA) Incident Data-derived from Pipeline Operator reports submitted on PHMSA Form F-7000.1. Ton-mile data are calculated using a base figure reported in a 1982 USDOT study entitled Liquid Pipeline Director and then combined with data from the Association of Oil Pipe Lines and the Oil Pipeline Research Institute.
Statistical
Issues
A response percentage cannot be calculated as the actual population of reportable incidents cannot be precisely determined. Results in any single year need to be interpreted with some caution. Targets could be missed or met as a result of normal annual variation in the number of reported incidents.

The performance measure is a ratio of “Tons Net Loss” and “Ton-Miles Shipped.” Uncertainty in either the numerator or the denominator can have a large effect on the overall uncertainty. Some factors of possible variance in the numerator include: 1) a few large spills can make PHMSA miss this goal, and 2) even when the total number of spills fluctuates, the net volume lost may increase. The denominator may fluctuate with the overall economy, i.e., the volume shipped increases with economic boom and decreases when the economy slows down. The environmental metric tracks a highly variable trend and PHMSA has noted in the past that the variability of this metric warrants close study.

The long-term pattern for the trend was to generally meet or miss the goal every other year, as actual performance bounced above and below the trend line regularly. The overall standard deviation of the metric continues to decline over time (the trend is becoming statistically more stable), and the measure has continued a general downward trend even though it bounces above and below the trend line.
Completeness
Compliance in reporting is very high and most incidents that meet reporting requirements are submitted. Operators must submit reports within 30 days of an incident or face penalties for non-compliance.

The reported estimates are based upon incident data reported from January through June 2007. There may be a 60-day lag in reporting and compiling information in the database for analysis. Traditionally, there are more incidents in the summer than in the winter. Preliminary estimates are based on data available as of mid-August, with six months of data through the end of June. The CY 2007 estimate is a projection using both a seasonal adjustment (based on a 10-year baseline) and a separate adjustment to account for the historical filing of late reports (92.5 percent of reports for January-June were filed by this time last year).
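The following sketch shows one way a projection of this kind could be assembled, assuming a first-half total, a seasonal share derived from a 10-year baseline, and a late-report completeness factor. The function and the seasonal share are illustrative assumptions; only the 92.5 percent completeness figure comes from the text above, and PHMSA's actual model may differ.

    # Illustrative only: a mid-year projection of the kind described above.
    # The seasonal share is a placeholder; the 92.5 percent completeness factor
    # comes from the text above. PHMSA's actual adjustments may differ.
    def project_full_year(jan_jun_reported_tons,
                          jan_jun_share_of_year=0.48,     # assumed seasonal share from a 10-year baseline
                          reporting_completeness=0.925):  # share of Jan-Jun reports filed to date
        # Step 1: gross up the first half for reports not yet filed.
        adjusted_first_half = jan_jun_reported_tons / reporting_completeness
        # Step 2: scale to a full calendar year using the historical seasonal share.
        return adjusted_first_half / jan_jun_share_of_year

    # Example: 5,000 tons reported so far for January-June.
    print(round(project_full_year(5_000)))  # about 11,261 tons projected for the CY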
Reliability
Projection of the environmental measure is less precise due to the nature of pipeline spills. A single large spill (10,000 barrels or more) can easily dwarf the total for all other CY spills combined. These large spills cannot be factored into a projection model due to their magnitude and infrequent and unpredictable occurrences. Thus, projections for the remaining six months of this CY assume that the average spill volume in the past six months will remain the same in the next six months. However, any large spill of non-highly volatile hazardous liquid in the next six months can move the projection upwards.

PHMSA routinely cross-checks accident reports against other sources of data, such as the telephonic reporting system for incidents requiring immediate notification provided to the National Response Center (NRC). PHMSA is developing a Best Management Practice to ensure quality of the incident data.

Data are not normalized to account for inflation. A fixed reporting threshold ($50,000) for property damage results in an increasing level of reporting over time. This threshold was set for hazardous liquid accidents in 1994.

Data are not normalized to account for the subjective judgment of the operator in filing reports for accidents that do not meet any of the quantitative reporting criteria. This may result in variations over time due to changes in industry reporting practices.

Lack of additional information for ton-mile data raises definitional and methodological uncertainties about the data's reliability. Moreover, the three different information sources introduce data discontinuities, making time comparisons unreliable. (National Transportation Statistics (NTS) 2002).

PHMSA uses this data in conjunction with pipeline safety data in prioritizing compliance and enforcement plans. However, beginning in FY 2008, PHMSA will begin reporting on the number of spills in high consequence areas as a new performance measure to replace the current one. This will address many of the reliability issues with the current measure.

Details on DOT Environmental Stewardship Measures
Aircraft Noise Exposure


Measure
Percent reduction in the number of people within the U.S. who are exposed to significant aircraft noise levels (Day/Night Average Sound Level (DNL) 65 decibels or more) from the three-year average for 2000 to 2002. (FY)
Scope
Residential population exposed to aircraft noise above Day-Night Sound Level of 65 decibels around U.S. airports.
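For context, the sketch below computes a Day-Night Average Sound Level (DNL) from 24 hourly equivalent sound levels using the standard 10-decibel nighttime (10 p.m. to 7 a.m.) penalty. This is the generic acoustical definition of DNL, offered as background only; it is not the MAGENTA model or the Integrated Noise Model described below.

    # Background illustration: the standard Day-Night Average Sound Level (DNL)
    # calculation from 24 hourly equivalent sound levels (dB), with a 10 dB
    # penalty added to nighttime hours (10 p.m.-7 a.m.). This is the generic
    # definition, not FAA's MAGENTA or Integrated Noise Model implementation.
    import math

    def dnl(hourly_leq_db):
        # hourly_leq_db: 24 values; index 0 covers midnight to 1 a.m., and so on.
        assert len(hourly_leq_db) == 24
        total_energy = 0.0
        for hour, level in enumerate(hourly_leq_db):
            night = hour < 7 or hour >= 22           # 10 p.m. to 7 a.m.
            penalized = level + (10 if night else 0)
            total_energy += 10 ** (penalized / 10)   # convert decibels to energy and sum
        return 10 * math.log10(total_energy / 24)    # energy-average back to decibels

    # Example: a location at a steady 62 dB all day exceeds the DNL 65 dB
    # threshold once the nighttime penalty is applied.
    print(round(dnl([62] * 24), 1))  # about 68.4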
Sources
In 1997, the FAA initiated a project to collect airport noise analysis databases for a large number of the world's airports. This sample database of airports would be the basis for assessing worldwide trends that would occur as the result of stringency, different land-use planning initiatives and operational procedures. The objective was to develop a tool that could be used by the Committee on Aviation Environmental Protection (CAEP) under the International Civil Aviation Organization (ICAO). Previous attempts by CAEP to globally assess aircraft noise exposure had limited success. The proposed FAA methodology had much more promise, as the number of sample databases was large and has since grown to around 200. Furthermore, a generalized methodology was included to account for airports for which noise databases did not exist. Based on the initial success of the FAA activity, the fourth meeting of CAEP (CAEP4) recommended that a task group be formed to complete the development of this tool for CAEP analysis.

This group, and subsequently the model, became known as MAGENTA (Model for Assessing Global Exposure from Noise of Transport Airplanes). The MAGENTA population exposure methodology has been thoroughly reviewed by this ICAO task group and was validated for several airport-specific cases. MAGENTA played an important role in the setting of new international aircraft noise standards by CAEP in 2001. CAEP used MAGENTA to assess the benefits (reduction in the number of people exposed to aircraft noise) of several noise stringency proposals. FY 2000 was the first year MAGENTA was used to track the aircraft noise exposure goal in the DOT Performance Plan.

A U.S. version of the global MAGENTA model, which uses input data on aircraft and operations specific to U.S. airports to determine noise exposure in the U.S., was developed in 2002. The general, regional FESG forecast used in the CAEP version of MAGENTA was replaced by the FAA Terminal Area Forecast (TAF), which provides current and accurate information on how operations will increase on an airport-specific basis.

The new U.S. version of MAGENTA also uses updated population data from the 2000 Census. The U.S. version of MAGENTA has evolved over time as more comprehensive databases were incorporated to improve the accuracy of the model. The data source for airport traffic changed from the Official Airline Guide (OAG) to the FAA Enhanced Traffic Management System (ETMS).

Unlike OAG, the ETMS database includes unscheduled air traffic, which allows for more accurate modeling of freight, general aviation, and military operations. The ETMS also provides more details on aircraft type for a more accurate distribution of aircraft fleet mix. Under the old model, unscheduled traffic was estimated and adjustments in the number of people exposed were made at the national level.

Data on the number of people relocated through the Airport Improvement Program are collected from FAA regional offices. Local traffic utilization data are collected from individual airports and updated periodically.
Statistical
Issues
This measure is derived from model estimates that are subject to errors in model specification. FAA has replaced the actual number of people exposed to significant noise with the percent decrease in the number of people exposed, measured from the three-year average for calendar years 2000-2002. Moving to the three-year average stabilizes noise trends, which can fluctuate from year to year and are affected by unusual events such as the 9/11 attacks and the subsequent economic downturn. The 2000-2002 base time period includes these events and is the same three-year period used for the emissions goal.

The move from actual numbers to percentages helps avoid confusion over U.S. noise exposure trends caused by annual improvements to the noise exposure model. A major change to MAGENTA resulted in a significant improvement in the estimate of the number of people exposed to significant noise levels around US airports. Until now, the scope of the measure included scheduled commercial jet transport airplane traffic at major U.S. airports. With access to better operational data sources, the scope of the MAGENTA calculation has expanded to include unscheduled freight, general aviation, and military traffic. The expanded scope of operations results in an increase in the estimate of the number of people exposed to significant noise.

The growth in the number of people exposed to significant noise results from improvements in measurement, not a worsening in aviation noise trends. Planned improvements to MAGENTA will continue to increase the estimate of the number of people exposed to aircraft noise, giving the false impression that aircraft noise exposure is increasing. Changing the noise performance goal to an annual percent change in aircraft noise exposure will better show the trend in aircraft noise exposure. The change will also make the Government Performance and Results Act (GPRA) goal consistent with FAA's Flight Plan goal.
Completeness
No actual count is made of the number of people exposed to aircraft noise. Aircraft type and event level are current. However, some of the databases used to establish route and runway utilization were developed from 1990 to 1997, with many of them now over seven years old. Changes in airport layout including expansions may not be reflected. The FAA continues to update these databases as they become available. The benefits of Federally-funded mitigation, such as buyout, are accounted for.

The noise studies obtained from U.S. airports have gone through a thorough public review process; either under the National Environmental Policy Act (NEPA) requirements or as part of a land use compatibility program.
Reliability
The Integrated Noise Model (the core of the MAGENTA model) has been validated with actual acoustic measurements at both airports and other environments such as areas under aircraft at altitude. External forecast data are from primary sources. The MAGENTA population exposure methodology has been thoroughly reviewed by an ICAO task group and was most recently validated for a sample of airport-specific cases.

Details on DOT Security Measures
Shipping Capacity


Measure
Percent of DOD-required shipping capacity, complete with crews, available within mobilization timelines. (FY)
Scope
This measure is based on the material availability of 44 ships in the Maritime Administration's Ready Reserve Force (RRF) and approximately 120 ships enrolled in the Voluntary Intermodal Sealift Agreement (VISA) program, which includes 60 ships enrolled in the Maritime Security Program (MSP).

The performance measure represents the number of available ships (compared to the total number of ships in the RRF and VISA) that can be fully crewed within the established readiness timelines. Crewing of the RRF vessels is accomplished by commercial mariners employed by private sector companies under contract to the government. Currently there are more qualified mariners than jobs, even in the most underrepresented categories. However, due to the voluntary nature of this system, there is no guarantee that sufficient mariners will be available on time and as needed, especially during a large, rapid activation.
Sources
Material availability of ships. Maritime Administration records (and information exchanged with DOD) on the readiness/availability status of each ship by the Office of Sealift Support (MSP/VISA ships) and the Office of Ship Operations (RRF ships). Typical reasons why a ship is not materially available include: the ship is in drydock, the ship is undergoing a scheduled major overhaul, or the ship is undergoing an unscheduled repair. The Maritime Administration and DOD also maintain records of the sealift ships enrolled in the MSP and VISA and their crew requirements.

Availability of mariners. The Maritime Administration, through their Mariner Outreach System, extracts the number of qualified mariners from the data recorded in the U.S. Coast Guard's Merchant Mariner Licensing and Documentation (MMLD) system. The willingness and availability of these mariners to sail is then estimated using all available information including total U.S. requirements for deep sea mariners, recent sea service, and mariner surveys.
Statistical
Issues
None.
 
Completeness
Data are complete.
Reliability
The data is reasonably reliable and useful in managing the reserve fleet readiness program.

Details on DOT Security Measures
DoD-Designated Port Facilities


Measure
Percent of DoD-designated commercial strategic ports that are available for military use within DoD-established readiness timelines.
Scope
The measure consists of the total number of DOD-designated commercial strategic ports for military use that forecast their ability to meet DOD readiness requirements within 48 hours of written notice from the Maritime Administration, expressed as a percentage of the total number of DOD-designated commercial strategic ports. Presently, there are 15 DOD-designated commercial strategic ports. Port readiness is based on monthly forecasts submitted by the ports and semi-annual port readiness assessments by the Maritime Administration in cooperation with other National Port Readiness Network partners.

The semi-annual port assessments provide data or other information on a variety of factors, including the following: the capabilities of channels, anchorages, berths, and pilots/tugboats to handle larger ships; rail access, rail restrictions, rail ramp offloading areas, and rail storage capacities; the availability of trained labor gangs and bosses; number and capabilities of available cranes; long-term leases and contracts for the port facility; distances from ports to key military installations; intermodal capabilities for handling containers; highway and rail access; number of port entry gates; available lighting for night operations; and number and capacity of covered storage areas and marshalling areas off the port.
Sources
The Maritime Administration's data are derived from monthly reports submitted by the commercial strategic ports and from MARAD/DOD semi-annual port assessments.
Statistical
Issues
None.
 
Completeness
Data are complete.
Reliability
The data is reasonably reliable according to the Bureau of Transportation Statistics and useful to the Maritime Administration in managing its port readiness program.

Details on DOT Security Measures
Transportation Capability Assessment for Readiness


Measure
Transportation Capability Assessment for Readiness Index Score. (FY)
Scope
The Office of Emergency Transportation (OET) was transferred to the Office of Intelligence, Security, and Emergency Response in Fiscal Year 2005. OET measures its performance in meeting the Homeland and National Security Performance goal to “prepare the Nation's transportation system for a rapid recovery from intentional harm and natural disasters” by assessing progress in six functional areas: (1) Crisis Management Center, (2) U.S. Disaster Response, (3) Training and Exercises, (4) Continuity of Operations (COOP), (5) Continuity of Government (COG), and (6) International Response. A new performance measure is under development to capture the performance of all of the Office of Intelligence, Security, and Emergency Response.
Sources
This measure is based on a self-assessment score determined by OET. Each functional area is rated against one to five specific criteria; a scoring sketch follows the full list of criteria below. The criteria are:

Function 1- Crisis Management Center (20 points)
Does the Secretary's Crisis Management Center (CMC) have adequate resources, such as communications, technology, and fully ready technical staff? (10 points)

Have the CMC workers been trained and participated in at least two exercises per year? (10 points)

Function 2-U.S. Disaster Response (20 points)
Do the Regional Emergency Transportation Coordinators (RETCO) and Regional Emergency Response Teams have the necessary time, skills and equipment to successfully carry out their natural disaster and WMD functions? (6 points)

Are there adequate secure communications with State and local governments and the transportation community when dealing with WMD or national security crises? (5 points)

Has the National Response Plan (NRP) Transportation Annex been updated in the past 2 years? (3 points)

Within the past 2 years, have all ten regions updated their NRP Transportation Annexes? (3 points)

Have DOT and DoD sufficiently coordinated their transportation functions? (3 points)

Function 3-Training and Exercises (20 points)
Have Regional Response Teams and key personnel from state and local government and industry participated in DOT sponsored training and exercises, and did the training and exercises include both natural disasters and national security crises? (20 points)

Function 4-Continuity of Operations (COOP) (20 points)
Is DOT's primary COOP site fully functional? (10 points)

Is the OST COOP plan updated at least once every two years? (3 points)

Have the Operating Administrations' COOP Plans been updated in the last 2 years? (4 points)

Has there been at least one COOP exercise or activation for both OST as well as all DOT modes in the last 12 months? (3 points)

Function 5-Continuity of Government (COG) (10 points)
Does DOT have a complete National Emergency Management Team (NEMT)? (5 points)

Have the NEMT team members received at least 1 training/exercise session during the year? (5 points)

Function 6-International Response (10 points)
Has DOT, as a U.S. representative to NATO, participated in at least 4 key NATO meetings and 2 exercises annually? (8 points)

Has DOT sufficiently coordinated its international disaster role with the U.S. State Department and its Civil Reserve Air Fleet activities with the DoD? (2 points)
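As noted above, a minimal sketch of how the point-based criteria could roll up into the index score follows, assuming each question is awarded between zero and its maximum point value. The data structure and the awarded values are illustrative placeholders, not OET's actual scoring worksheet.

    # Illustrative only: roll the self-assessment criteria up into an index score.
    # Awarded values are placeholders; maximum points follow the list above.
    def index_score(awarded):
        # awarded: {function name: list of (points awarded, maximum points)}
        total, maximum = 0, 0
        for criteria in awarded.values():
            for points, max_points in criteria:
                total += min(points, max_points)   # cap each criterion at its maximum
                maximum += max_points
        return total, maximum

    scores = {
        "Crisis Management Center": [(10, 10), (8, 10)],
        "U.S. Disaster Response": [(6, 6), (4, 5), (3, 3), (3, 3), (2, 3)],
        "Training and Exercises": [(18, 20)],
        "Continuity of Operations": [(10, 10), (3, 3), (4, 4), (3, 3)],
        "Continuity of Government": [(5, 5), (5, 5)],
        "International Response": [(8, 8), (2, 2)],
    }
    total, maximum = index_score(scores)
    print(f"{total} of {maximum} points")  # the maximums sum to 100, per the criteria above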
Statistical
Issues
None.
 
Completeness
The measure is complete and reflects the combined score of all evaluation criteria.
Reliability
Scores are reliable to the extent that specific quantitative evaluation criteria are available for each of the questions used to rate the functions.

Details on DOT Organizational Excellence Measures
DOT Major System Acquisition Performance


Measures
  1. For major DOT aviation systems, percentage of cost goals established in the acquisition project baselines that are met.
  2. For major DOT aviation systems, percentage of scheduled milestones established in acquisition project baselines that are met.
Scope
This performance measure encompasses acquisition management data for all of DOT's major systems acquisition contracts, primarily in the FAA, but also from any office procuring a major system as defined in OMB Circular A-11, and DOT's Capital Programming and Investment Control order.
Sources
The data for acquisition programs comes from each DOT organization procuring major systems.

FAA tracks and reports the status of all schedule and cost performance targets using an automated database, providing a monthly Red, Yellow, or Green assessment that indicates the confidence level in meeting the established milestones. Comments are provided monthly that detail problems, issues, and corrective actions to ensure milestones and costs are maintained within the established performance targets. The performance status is reported monthly to the FAA Administrator through FAA Flight Plan meetings.
Statistical
Issues
The programs that are selected each fiscal year represent a cross section of programs within the Air Traffic Organization. They include programs that have an Exhibit 300 as well as what is referred to as “buy-by-the-pound” programs. The latter are typically not required to undergo a standard acquisition life cycle process. There is no bias with the selection of milestones. The milestones selected represent the program office's determination as to what effort they deem “critical” or important enough to warrant inclusion in the Acquisition Performance goal for the year. Typically there are anywhere from two to four milestones. Interim milestones are also tracked but not included in the final performance calculation.
Completeness
This measure is current with no missing data. Each DOT organization maintains its own quality control checks for cost, schedule, and technical performance data of each major systems acquisition in accordance with OMB Circulars A-11, A-109, and A-130, Federal Acquisition Regulations, and Departmental orders implementing those directives and regulations.
Reliability
Each DOT organization having major system acquisitions uses the data during periodic acquisition program reviews and for determining resource requests. It is also used during the annual budget preparation process, for reporting progress made in the President's Budget, and for making key program management decisions.

Details on DOT Organizational Excellence Measures
Major DOT Infrastructure Project Cost and Schedule Performance


Measures
  1. For major Federally funded infrastructure projects, percentage that meet schedule milestones established in project or contract agreements, or miss them by less than 10 percent. (FY)
  2. For major Federally funded infrastructure projects, percentage that meet cost estimates established in project or contract agreements, or miss them by less than 10 percent. (FY)
Scope
Active FTA New Starts projects with Full Funding Grant Agreements larger than $1 billion; FHWA projects with a total cost of $1 billion or more, or projects approaching $1 billion with a high level of interest by the public, Congress, or the Administration; and FAA runway projects with a total cost of $1 billion or more.
Sources
FTA - FTA uses independent reviews and third-party assessment providers such as the Corps of Engineers and other oversight contractors to validate the accuracy of project budgets and schedules before grantees are awarded Full Funding Grant Agreements. Project/Financial Management Oversight contractors review project budgets on a monthly basis and FTA assesses projected total project costs against baseline cost estimates and schedules.

FHWA - Cost estimates and scheduled milestones for an FHWA Major Project are measured from the time the Initial Financial Plan (IFP) is prepared and approved through the required Annual Project Update. The update contains the latest information about the cost and schedule for each of the Major Projects. Division Office Project Oversight Managers provide monthly status reports as a supplement to the Annual Update.

FAA - Project cost performance for each major project is measured from cost estimates submitted by the airport sponsor to support its letter of intent (LOI) and actual expenditure data from FAA data sources (for grants) and airport sponsor submissions (for overall project cost). Project schedule performance is measured from the Runway Template Action Plan (RTAP), as specified in the National Airspace System Operational Evolution Partnership.
Statistical
Issues
FTA - Scheduled milestone achievement is measured by the difference between the actual Revenue Operations Date and the date of the execution of the Full Funding Grant Agreement divided by the difference between the Revenue Operations Date in the Full Funding Grant Agreement and the date of execution of the Full Funding Grant Agreement. Cost estimate achievement is measured by the actual Total Project Cost divided by the Total Project Cost in the Full Funding Grant Agreement.

FHWA - A scheduled milestone is defined as being achieved upon completion of the project. Major Projects generally require 6-10 years from an IFP to completion. Cost estimates are prepared by comparing the costs in the most recent Annual Update to the IFP estimate. Because of the small number of Major Projects, FHWA may not meet its target if only a few projects show cost increases.

FAA - Schedule completion performance is measured for two milestones: the project design and the project construction. A project milestone is considered to meet the performance target if the actual cumulative rate of completion is not more than 10 percent behind the scheduled cumulative rate of completion, using the RTAP schedule as a base. For example, a 36-month schedule would allow a 3.6-month delay at any point in the schedule.

Cost performance is measured by comparing cumulative actual costs incurred at the end of each fiscal year with cumulative costs shown in the schedule of costs submitted with the LOI application. A project will be considered to meet the cost performance target if cumulative costs are no more than 10 percent higher than projected costs in the cost schedule.
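The sketch below illustrates the FTA ratio calculation and a 10 percent threshold test of the kind described above. The function names and the simple date arithmetic are assumptions for illustration and are not the agencies' actual reporting tools.

    # Illustrative only: the FTA schedule ratio and a generic 10 percent
    # threshold test of the kind described above. Names and date arithmetic
    # are assumptions, not the agencies' reporting tools.
    from datetime import date

    def fta_schedule_ratio(ffga_signed, ffga_revenue_ops, actual_revenue_ops):
        # Actual elapsed time divided by planned elapsed time, both measured
        # from execution of the Full Funding Grant Agreement (FFGA).
        planned_days = (ffga_revenue_ops - ffga_signed).days
        actual_days = (actual_revenue_ops - ffga_signed).days
        return actual_days / planned_days

    def within_ten_percent(actual, baseline):
        # Meets the target if the actual value exceeds the baseline by no more
        # than 10 percent (applies to cumulative cost or schedule completion).
        return actual <= 1.10 * baseline

    # Example: a project planned to open five years after FFGA execution that
    # opens six months late has a schedule ratio of roughly 1.10.
    ratio = fta_schedule_ratio(date(2002, 1, 1), date(2007, 1, 1), date(2007, 7, 1))
    print(round(ratio, 2), within_ten_percent(ratio, 1.0))  # 1.1 True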
Completeness
FTA - This measure is current with no missing data. The information is currently tracked with an in-house MS Excel database. A Web-based database, FASTTrak, is being developed to track this type of project information in the future. The measures are calculated monthly by an FTA Headquarters Engineer, checked by the Team Leader and reviewed by the Office Director.

FHWA - The FHWA Major Projects Team maintains the project schedules and cost estimate information in a spreadsheet, which is updated when a Project IFP is approved and/or the Annual Update is received and accepted. The data is available and reported on a semi-annual basis.

FAA - Federal financial commitments to airport sponsors are tracked by two automated systems, the System of Airports Reporting (SOAR) and the Delphi financial system. These systems are updated immediately when a grant payment is made or a grant is amended or closed out. The FAA relies on the airport sponsor to report actual project costs on a quarterly basis. Project design and construction milestones (scheduled and actual) are contained in the RTAP and developed by all involved FAA lines of business, the airport sponsor, and the airlines. The RTAP comprises the tasks that must be considered when commissioning the runway and assigns accountability to the airport, airline, and FAA, allowing early identification and resolution of issues that might impact the runway schedule.
Reliability
FTA - Calculations of schedule achievement are based on the month of this report, not on the projected Revenue Operations Date. Re-calculations of schedule and cost baselines are made to reflect amendments to the Full Funding Grant Agreements. FTA uses independent reviews and third-party assessment providers such as the Corps of Engineers and other oversight contractors to validate the accuracy of project budgets and schedules before grantees are awarded Full Funding Grant Agreements. FTA continues to work to improve its rigorous oversight program and has made project cost and budget performance a core accountability of every senior manager in the agency.

FHWA - Both the IFP and the Annual Update undergo a rigorous review by the Division Office and the Major Projects Team prior to approval and acceptance.

FAA - Reporting of Federal financial commitments to airport sponsors is done in accordance with FAA policy and guidance related to administering the Airport Improvement Program (AIP) and the authorizing statute. The FAA's AIP Branch monitors FAA regional offices for compliance with policy and guidance, including input into SOAR and Delphi, and conducts periodic regional evaluations. Actual project costs reported by the airport sponsor are verified by an annual single audit required by OMB. Such audits cover the entire financial and compliance operation of the airport sponsor's governing body. Status of the project design and construction schedule contained in the RTAP is updated quarterly, based on meetings held with the airport sponsor and airlines.

Details on DOT Organizational Excellence Measures
Transit Grant Process Efficiency


Measure
Percent of transit grants obligated within 60 days after submission of a completed application. (FY)
Scope
FTA grants obligated during a fiscal year period for major programs: Urbanized area, non-Urbanized area, and Elderly and Persons with Disabilities formula grants; Capital grants; Job Access and Reverse Commute grants; Over-The-Road Bus grants; and Planning grants.
Sources
FTA internal databases including the Transportation Electronic Award Management (TEAM) system.
Statistical
Issues
Processing time is calculated from submission date to obligation date. Zero-dollar, non-funding grant amendments are excluded from analysis.
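A minimal sketch of this calculation is shown below, assuming each grant record carries a submission date, an obligation date, and an award amount used to exclude zero-dollar amendments. The field names are illustrative and do not reflect the TEAM system schema.

    # Illustrative only: the 60-day processing measure. Field names are
    # assumptions and do not reflect the TEAM system schema.
    from datetime import date

    def percent_within_60_days(grants):
        # Exclude zero-dollar, non-funding amendments, as described above.
        tested = [g for g in grants if g["amount"] > 0]
        if not tested:
            return 0.0
        on_time = sum(1 for g in tested if (g["obligated"] - g["submitted"]).days <= 60)
        return 100.0 * on_time / len(tested)

    grants = [
        {"amount": 2_500_000, "submitted": date(2007, 3, 1), "obligated": date(2007, 4, 10)},
        {"amount": 750_000, "submitted": date(2007, 5, 15), "obligated": date(2007, 8, 1)},
        {"amount": 0, "submitted": date(2007, 6, 1), "obligated": date(2007, 6, 5)},
    ]
    print(f"{percent_within_60_days(grants):.0f}% obligated within 60 days")  # 50%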
Completeness
Data are current with no missing data, since FTA uses internal databases, including the TEAM system. All grants obligated during the fiscal year for the selected programs (see Scope section) are included in the original data set. In rare cases where the submission date is omitted (which prevents processing time calculation), missing dates are researched and added to the database prior to reporting. The zero-dollar amendments are excluded because they are not representative of the grant processing action being tested.
Reliability
The files that contain raw data from TEAM have been tested to ensure that all fiscal-year-to-date obligated grants are included and that data is current. Report programs screen various date fields to identify any missing or out-of-sequence dates that would skew averages; dates are corrected prior to reporting. Reconciliation reports of TEAM data are produced monthly and anomalies are explored and resolved. Detailed monthly grant processing progress reports provide management tools to the Regional Administrators, who continue to make this goal a top priority.

SUMMARY OF FINANCIAL STATEMENT AUDIT AND MANAGEMENT ASSURANCES

TABLE 1. SUMMARY OF FINANCIAL STATEMENT AUDIT

Audit Opinion: Unqualified
Restatement: Yes

Material Weaknesses                                          Beginning   New   Resolved   Consolidated   Ending
                                                             Balance                                     Balance
Timely Processing of Transactions and Accounting for
Property, Plant & Equipment, including the Construction
in Progress Account at the FAA                                   ✓                                          1
Financial Management, Reporting & Oversight at the HTF           ✓                ✓                         0
Total Material Weaknesses                                        2                1                         1
TABLE 2. SUMMARY OF MANAGEMENT ASSURANCES

Effectiveness of Internal Control over Financial Reporting (FMFIA, Section 2)
Statement of Assurance: Qualified

Material Weaknesses                                          Beginning   New   Resolved   Consolidated   Reassessed   Ending
                                                             Balance                                                  Balance
Timely Processing of Transactions and Accounting for
Property, Plant & Equipment, including the Construction
in Progress Account at the FAA                                   ✓                                                       1
Financial Management, Reporting & Oversight at the HTF           ✓                ✓                                      0
Total Material Weaknesses                                        2                1                                      1

 

Effectiveness of Internal Control over Operations (FMFIA, Section 2)
Statement of Assurance: Qualified

Material Weaknesses                                          Beginning   New   Resolved   Consolidated   Reassessed   Ending
                                                             Balance                                                  Balance
Weaknesses in the Stewardship and Oversight of Federal-Aid
Projects Administered by Local Program Agencies                          ✓                                               1
Total Material Weaknesses                                                1                                               1

 

Conformance with Financial Management System Requirements (FMFIA, Section 4)
Statement of Assurance: Qualified
Non-Conformances                                             Beginning   New   Resolved   Consolidated   Reassessed   Ending
                                                             Balance                                                  Balance
Integrated Financial Management Systems                          ✓                ✓                                      0
Federal Accounting Standards                                     ✓                                                       1
Total Non-Conformances                                           2                1                                      1

 

Conformance with Federal Financial Management Improvement Act (FFMIA)
Overall Substantial Compliance                 Agency (Yes or No)   Auditor (Yes or No)
1. System Requirements                         Yes                  Yes
2. Accounting Standards                        No                   No
3. USSGL at Transaction Level                  Yes                  Yes
DEPARTMENT OF TRANSPORTATION
FEDERAL AVIATION ADMINISTRATION
PENDING MATERIAL WEAKNESS

HIGH RISK AREA: Timely Processing of Transactions and Accounting for Property, Plant, and Equipment, including the Construction in Progress Account & FFMIA Non-Compliance.

EXECUTIVE SUMMARY / MILESTONES / PLANNED DATES
(O = Original; L = Last Year; C = Current)

How shall we fix it? FAA will revise and implement policies, procedures, and controls to improve the capitalization and retirement of Property, Plant & Equipment (PP&E).

How will we know it's fixed?

1. Policies and procedures support auditable PP&E balance.

2. Increased oversight of the capitalization process.

3. Monitoring controls indicate policies and procedures are being followed.

4. Quality review of accounts indicates project activity is conducted properly.

Planned (Near-Term)
1. Develop and implement business process improvement for the timely capitalization and retirement of PP&E. C - 12/2007
2. Formalize organizational responsibility and oversight of property capitalization efforts. C - 12/2007
3. Identify additional preventative and detective controls and initiate changes, when necessary, to ensure proper capitalization and retirement of FAA assets. C - 12/2007
4. Continue to conduct training on the capitalization process. C - 03/2008
5. Improve quality control review procedures at headquarters and in the regions to ensure capitalized assets are complete, accurate, and properly valued during the construction and close-out of construction in progress projects. C - 03/2008
6. Continue to improve the process to ensure that assets placed into service are properly supported by appropriate documentation per FAA policy. C - 06/2008
Completion Date: (Overall completion dates for correcting entire material weakness or material nonconformance). C - 06/2008
DEPARTMENT OF TRANSPORTATION
FEDERAL HIGHWAY ADMINISTRATION
PENDING MATERIAL WEAKNESS

HIGH RISK AREA: Weaknesses in the Stewardship and Oversight of Federal-Aid Projects Administered by Local Program Agencies (LPA).

EXECUTIVE SUMMARY / MILESTONES / PLANNED DATES
(O = Original; L = Last Year; C = Current)

How shall we fix it?
FHWA will work with State DOTs to identify proper stewardship and oversight functions to ensure Federal-aid requirements are met on LPA-administered projects.

How will we know it's fixed?

1. Policies and procedures support auditable results.

2. Increased oversight of the projects administered by LPAs.

3. Monitoring controls indicate policies and procedures are being followed.

4. Quality reviews of LPA-administered projects indicate that Federal-aid requirements are being met.

Planned (Near-Term)
1. Initiate evaluation of the State DOT's existing processes and procedures. C - 09/2007
2. Evaluate the need for additional process reviews and begin those reviews. C - 09/2007
3. Initiate discussions with the State DOT on the development or enhancement of their LPA project oversight program. C - 09/2007
4. Begin analyses and development of regulations that may be necessary to more formally establish a structured LPA project oversight program. C - 09/2007
5. Continue process reviews as needed. C - 09/2007
6. Submit detailed corrective action plans as appropriate to address development needs and/or corrective measures to assure the State DOT has or will have a comprehensive LPA project oversight program. C - 01/2008
7. Update report to the LPOC on whether the State DOT has, or is working to develop, a comprehensive LPA project oversight program. C - 04/2008
8. Report to the LPOC on whether the State DOT has a comprehensive LPA project oversight program. C - 10/2008
9. As appropriate, complete the rulemaking process to implement any needed regulations that more formally establish a structured LPA project oversight program. C - 10/2009
Completion Date: (Overall completion dates for correcting entire material weakness or material nonconformance). C - 10/2009

IPIA REPORTING DETAILS

1. IMPROPER PAYMENT PROGRAM RISK ASSESSMENT DESCRIPTION

In prior years, the Department identified the ten programs in the table below as having the highest potential for significant improper payments.

Operating Administration                     Program
Federal Highway Administration               Federal-aid Highway Program - State Project*
                                             Federal Lands Highway Program - Contracts
Federal Aviation Administration              Operations
                                             Facilities and Equipment
                                             Airport Improvement Program*
Federal Transit Administration               Capital Investment Grants*
                                             Formula Grants*
Office of the Secretary of Transportation    Working Capital Fund
                                             DOT Payroll**
Federal Railroad Administration              Grants
*Identified in the former Section 57 of OMB Circular A-11
**For administrative purposes, payroll was reviewed as a single program for all of DOT
The Federal-aid Highway Program, Airport Improvement Program, and Formula Grants Program were included in the FY 2007 nationwide IPIA review.

In accordance with Improper Payments Information Act (IPIA) requirements and OMB guidelines, during FY 2004 and 2005 six of the programs in the table above were subject to a risk assessment and an in-depth improper payment review, including a review of payments by the Department to grantees. No improper payments exceeding both 2.5 percent of program payments and $10 million were found. The six programs were subject to a risk assessment based on the following criteria: Gross Expended Amount, Complexity of Payments, Established Internal Controls and Oversight, Type of Program Recipient, Number of Program Recipients, Volume of Payments, Probability of Growth, and Changes in the Program from the previous year. These risk criteria were used to determine the sample size for each program. From that, each program underwent an in-depth, statistically based improper payment review.

Based on the FY 2004 and 2005 reviews, the Department concluded that the six programs subject to the risk assessment and improper payment test procedures were not susceptible to significant improper payments as defined by the OMB. For the remaining four programs, because of the significance of grantee payments and the fact that such payments were not tested under previous efforts due to a lack of data required for testing at the Federal level, additional testing was required. The four programs are the Federal Highway Administration (FHWA) Federal-aid Highway Program, Federal Aviation Administration (FAA) Airport Improvement Program, Federal Transit Administration (FTA) Formula Grants Program, and the FTA Capital Investment Grants Program. Because of program and funding changes, the Department was uncertain at the beginning of FY 2007 as to whether the FTA Capital Investment Grants Program was subject to improper payment testing. Subsequently, OMB advised the Department to proceed with model development for nationwide testing in FY 2008.

2. SAMPLING PROCESS AND RESULTS

In FY 2007, the Department continued implementing the IPIA, which requires that agencies: (1) review programs and identify those susceptible to significant improper payments, (2) report to Congress on the amount and causes of improper payments, and (3) develop approaches for reducing such payments.

In FY 2007, the Department successfully completed its review of the FHWA Federal-aid Highway Program, FAA Airport Improvement Program, and the FTA Formula Grants Program. With respect to the Formula Grants Program, as described below, successful completion pertains to approximately one-third of the grantees. In addition, the Department developed and tested a model for determining the amount of improper payments in the FTA Capital Investment Grants Program.

In FY 2007, the Department re-engaged AOC Solutions, Inc. to develop the nationwide sampling plan, collect the results from the application of test procedures, and provide a nationwide estimate of improper payments for Federal-aid Highway Program, Airport Improvement Program, and Formula Grants Program. With respect to the Formula Grants Program, the sampling plan, test procedures, and test results only apply to approximately one-third of the grantees covered by the FTA's Formula Grant Triennial Review Program. 49 U.S.C. 5307 prescribes a Triennial Review of all Formula Grant grantees. OMB Circular A-123, Attachment C, paragraph F, provides for alternative approaches, including determining the amount of improper payments for components, such as those addressed in the foregoing statute.

In addition, AOC developed and tested a model for determining the amount of improper payments in the FTA Capital Investment Grants Program. The Department will apply the model on a nationwide basis to the Capital Investment Grants Program in FY 2008.

The samples designed to execute the model are of sufficient size to yield an estimate with a 90 percent confidence interval within +/- 2.5 percentage points around the estimate of the percentage of erroneous payments, as prescribed by OMB. The results of these efforts are discussed below.
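For reference, the textbook single-stage approximation of the sample size needed to estimate a proportion within +/- 2.5 percentage points at 90 percent confidence is sketched below. The Department's actual multi-stage, dollar-weighted designs are more involved, so this figure is a point of comparison only, not the method used for the reviews described here.

    # Reference calculation only: sample size for estimating a proportion within
    # +/- 2.5 percentage points at 90 percent confidence under simple random
    # sampling. The Department's multi-stage designs differ.
    import math

    def simple_random_sample_size(margin=0.025, z=1.645, p=0.5):
        # z = 1.645 for a two-sided 90 percent confidence level;
        # p = 0.5 is the most conservative assumption about the error rate.
        return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

    print(simple_random_sample_size())  # 1,083 items under these worst-case assumptions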

FEDERAL-AID HIGHWAY PROGRAM

The Department developed and executed a sampling plan to test project payments and estimate the amount of improper payments nationwide.

The FHWA executed the nationwide testing program using personnel from the FHWA division offices and covered Federal payments to grantees over the twelve-month period March 1, 2006 through February 28, 2007.

The sampling plan involved a multi-staged statistical approach that included the selection of 53 Federal payments, 40 state payments, and then 230 testable line items from those payments for testing. The 2007 sample size is significantly smaller than the 2006 sample size because of a change in objectives. In 2006, the Department wanted to ensure all 50 states and two territories received sample items for testing. This required a substantially larger sample than would have been required had the Department not required that all states and territories receive sample items. In 2007, the sample was designed to support a nationwide estimate of improper payments and was not designed to provide sample items to all states and territories. The states that did not appear in the IPIA sample received sample items for FIRE testing.

The test procedures applied to the line items were designed to test a range of administrative elements and contractual elements. Tests of administrative elements included determining whether payments were properly approved, billed at the correct federal participation rate, and whether billings and payments were mathematically accurate. Tests of contractual elements included determining whether payments were in accordance with contract rates/prices for specified materials and whether material quality tests indicated that materials met contractual requirements.

Improper payments totaling $45,568 were found in the sample of 230 tested items. The projection of this result to the population of program payments for the twelve-month period results in an improper payment estimate of $55.2 million +/- $0.5 million. This projection does not meet OMB's definition of significant improper payments ($10 million and 2.5 percent of total program payments).
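A short sketch of the significance test cited in the parenthetical above follows; the estimate must exceed both thresholds to be significant. The total-program-payments figure in the example is a placeholder, not a value from this report.

    # Illustrative only: OMB's significance test as cited above requires the
    # estimate to exceed BOTH $10 million AND 2.5 percent of total program
    # payments. The program payment total below is a placeholder.
    def is_significant(estimated_improper, total_program_payments):
        return (estimated_improper > 10_000_000
                and estimated_improper > 0.025 * total_program_payments)

    # Example using the $55.2 million estimate above and an assumed program
    # payment total of $33 billion (placeholder).
    print(is_significant(55_200_000, 33_000_000_000))  # False: below the 2.5 percent threshold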

The improper payments reported resulted from factors such as unallowable charges, insufficient supporting documentation, incorrect calculations, and duplicate payments. The FHWA has implemented its Financial Integrity Review and Evaluation (FIRE) program to monitor State and Territory payments and provide a mechanism for assisting these entities with effectively addressing operational issues that result or could result in improper payments.

FTA FORMULA GRANTS PROGRAM

FY 2007 was the first year of nationwide coverage of the FTA Formula Grants Program. In FY 2006, the FTA developed and tested a model for use in IPIA testing in 2007. The FTA developed and executed a sampling plan to determine the amount and cause of improper payments in the Formula Grants Program and to assist the FTA in incorporating the IPIA test procedures in its statutorily required Triennial Review Program.

The FTA executed the nationwide testing program for grantees covered by the 2007 Triennial Review Program using contractor personnel. The review covered the twelve-month period March 1, 2006 through February 28, 2007.

The sampling plan involved a multi-staged statistical approach that included the selection of 60 Federal payments, 30 transportation authorities' payments, and then 169 testable line items from those payments for testing. The test procedures applied to the line items were designed to test a range of administrative elements and contractual elements. Tests of administrative elements included determining whether payments were properly approved, billed at the correct federal participation rate, and whether billings and payments were mathematically accurate. Tests of contractual elements included determining whether payments were in accordance with contract rates/prices for specified materials and whether material quality tests indicated that materials met contractual requirements.

Improper payments totaling $2,326.16 were found in the sample of 169 tested items. The projection of this result to the population of program payments for the twelve-month period results in an improper payment estimate of $2.77 million +/- $0.03 million. This projection does not meet OMB's definition of significant improper payments ($10 million and 2.5 percent of total program payments).

The improper payments reported resulted from factors such as miscalculated federal participation share and lack of supporting documentation.

FTA CAPITAL INVESTMENT GRANTS PROGRAM

In FY 2007 the FTA developed and tested an improper payment test model at one recipient of Capital Investment Grants Program funding. The FTA patterned the model on the model developed for the FTA Formula Grants Program in 2006.

The test model involved developing test workbooks with test criteria and procedures. The sampling plan involved a multi-staged statistical approach that included the selection of 17 Federal payments, 49 grantee payments, and then 83 testable line items from those payments for testing. The test procedures applied to the line items were designed to test a range of administrative elements and contractual elements. Tests of administrative elements included determining whether payments were properly approved, billed at the correct federal participation rate, and whether billings and payments were mathematically accurate. Tests of contractual elements included determining whether payments were in accordance with contract rates/prices for specified materials and whether material quality tests indicated that materials met contractual requirements.

Improper payments totaling $361,691.73 were found in the sample of 83 tested items. The projection of this result to the population of program payments for the twelve-month period results in an improper payment estimate of $0.55 million +/- $0.39 million. This projection applies only to the single grantee and does not apply nationwide.

The improper payments reported resulted from draw-downs in excess of federal participation share.

The FTA will apply the model on a nationwide basis in FY 2008 in order to meet the requirements of the IPIA.

FAA AIRPORT IMPROVEMENT PROGRAM

The FAA developed and executed a sampling plan to determine the amount and cause of improper payments in the Airport Improvement Program. The FAA review covered the twelve-month period March 1, 2006 through February 28, 2007.

The sampling plan involved a multi-staged statistical approach that included the selection of 50 Federal payments, 30 sponsor payments, and then 95 testable line items from those payments for testing. The test procedures applied to the line items were designed to test a range of administrative elements and contractual elements. Tests of administrative elements included determining whether payments were properly approved, billed at the correct federal participation rate, and whether billings and payments were mathematically accurate. Tests of contractual elements included determining whether payments were in accordance with contract rates/prices for specified materials and whether material quality tests indicated that materials met contractual requirements.

The review found compliance with the administrative and contractual elements addressed in the test model and identified no improper payments.

3. CORRECTIVE ACTION PLANS FOR REDUCING THE ESTIMATED RATE OF IMPROPER PAYMENTS.

FHWA FEDERAL-AID HIGHWAY PROGRAM

FHWA Division Offices listed the following reasons for the improper payments identified as a result of the IPIA review: Data entry errors, missing approvals, incorrect cost allocations, payments for missing field office equipment, unallowable charges, materials received not in accordance with contract terms, and source documentation not supporting payment amounts.

The Department and the FHWA will fully implement the FHWA's FIRE program in FY 2007 to monitor State and Territory payments and to provide a mechanism for assisting these entities in effectively addressing operational issues that result, or could result, in improper payments. The Department believes that this proactive approach will establish internal control mechanisms for both preventing and detecting improper payments through effective oversight and outreach, the latter being intended to assist grantees in improving program management.

FTA FORMULA GRANTS PROGRAM

The FTA plans to adapt its statutorily required Triennial Review Program to include procedures to test for improper payments. Going forward, this program will focus on determining the amount and causes of improper payments.

In addition, the FTA will advise grantees of actions needed to ensure reimbursement requests are in accordance with grant cost sharing or matching requirements and that all transactions are supported properly prior to submission of reimbursement requests. Finally, the FTA will assess the feasibility of follow-up actions to assess the extent to which grantees covered by the 2007 review are addressing deficiencies that resulted in improper payment determinations.

FTA CAPITAL INVESTMENT GRANTS PROGRAM

Since the effort to date has been on IPIA model development and testing, the Department and the FTA have no nationwide statistics on the amount and rate of improper payments for this program. The objectives of the FY 2007 effort were to develop the model and field test it to assist the FTA in fully implementing the IPIA requirements for this program in FY 2008. The FY 2007 model development and testing effort was not designed to provide a nationwide or program-wide estimate of improper payments. However, in FY 2008, this test model will be executed nationwide for this program.

While the FTA's efforts on the Capital Investment Grants Program were limited, the FTA will advise grantees of actions needed to ensure reimbursement requests are in accordance with grant cost sharing or matching requirements and that all transactions are supported properly prior to submission of reimbursement requests. Once the FTA completes nationwide testing in FY 2008, it will assess the feasibility of follow-up actions to assess the extent to which grantees are addressing deficiencies, if any, that result in improper payment determinations.

4. DEPARTMENT ACCOMPLISHMENTS IN GRANT PROGRAMS

The Department completed the development and testing of models for determining the amount and rate of improper payments in its major grant programs. The reviews of the FHWA Federal-aid Highway Program, the FTA Formula Grants Program, and the FAA Airport Improvement Program represented nationwide application of an innovative research and development strategy implemented in FY 2005 and updated in FY 2006. This methodology resolved a limitation of prior-year efforts, which examined Federal outlays only to the level of primary recipients. As discussed above, a model that reached grantee-level data in the FTA Capital Investment Grants Program was developed and field tested in FY 2007. This model will be rolled out nationwide in FY 2008.

5. IMPROPER PAYMENT ESTIMATED ERROR RATES, DOLLAR ESTIMATES, AND OUTLOOK
For each program, figures are shown for the prior year (PY), the current year (CY), and the three following years (CY+1 through CY+3): Outlays (estimated for future years), IP %, and IP $ (dollars in millions; see note 1 below).

FHWA: Federal-aid Highway Program
  PY:   Outlays 32,883;  IP % 0.247;  IP $ 30.15
  CY:   Outlays 33,347;  IP % 0.2;  IP $ 55.2
  CY+1: Est. Outlays 37,140;  IP % N/A;  IP $ N/A
  CY+2: Est. Outlays 39,300;  IP % N/A;  IP $ N/A
  CY+3: Est. Outlays --;  IP % N/A;  IP $ N/A

FTA: Formula Grants Program (note 2)
  PY:   Outlays N/A;  IP % N/A;  IP $ N/A
  CY:   Outlays 6,281 (note 3);  IP % 0.3;  IP $ 4.32
  CY+1: Est. Outlays 5,700;  IP % N/A;  IP $ N/A
  CY+2: Est. Outlays 5,700;  IP % N/A;  IP $ N/A
  CY+3: Est. Outlays --;  IP % N/A;  IP $ N/A

FTA: Capital Investment Grants Program (note 4)
  PY:   Outlays N/A;  IP % N/A;  IP $ N/A
  CY:   Outlays 2,663;  IP % 1.1;  IP $ 0.6
  CY+1: Est. Outlays 2,800;  IP % --;  IP $ --
  CY+2: Est. Outlays 2,800;  IP % --;  IP $ --
  CY+3: Est. Outlays --;  IP % --;  IP $ --

FAA: Airport Improvement Program
  PY:   Outlays N/A;  IP % N/A;  IP $ N/A
  CY:   Outlays 3,874;  IP % N/A;  IP $ N/A
  CY+1: Est. Outlays 3,967;  IP % N/A;  IP $ N/A
  CY+2: Est. Outlays 4,075;  IP % N/A;  IP $ N/A
  CY+3: Est. Outlays --;  IP % N/A;  IP $ N/A
  1. Dollars are in millions
  2. Results for the FTA Formula Grants Program apply only to approximately one-third of the grantees as described in Section 2 above.
  3. Outlays for grantees covered by 2007 IPIA testing, upon which the FTA Formula Grants Program IP % is based, approximate $1.2 billion.
  4. CY statistics for the Capital Investment Grants program pertain only to a single grantee and, accordingly, are not projectible nationwide.
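Read against the outlay columns, the IP % figures appear to express the estimated improper dollars as a share of the corresponding outlays. For example, the Federal-aid Highway Program's current-year IP $ estimate of 55.2 against outlays of 33,347 works out to roughly 0.17 percent, consistent with the 0.2 shown above.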
6. RECOVERY AUDIT RESULTS

The recovery auditor, Horn and Associates, has continued working to identify overpayments and other areas of weakness. The firm has been granted access to the Department's financial system to review payment records and has been tightly integrated into existing business processes with minimal disruption or cost to the government.

To date, the recovery auditor has not uncovered any chronic problems with DOT's business processes and procedures. The firm is currently reviewing duplicate payments, prompt payment interest paid in error, sales tax on utility billings, and open credits on statements. The table below presents its findings to date:

Agency Component | Amount Subject to Review for CY Reporting | Amounts Reviewed | Amounts Identified for Recovery | Amounts Recovered CY | Amounts Recovered PY
OST 2,846,512,015 65,751,781 68,961 0 0
FAA 9,528,068,552 150,219,554 4,739,975 1,111,618 45,109
FHWA 2,343,398,062 218,995,827 340,622 10,000 0
FMCSA 182,705,574 5,740,338 97,273 0 0
FRA 5,815,740,923 922,035,393 72,384 0 0
FTA 327,017,797 10,908,847 563,769 0 0
MARAD 2,014,025,448 48,528,867 568,010 0 0
NHTSA 1,857,952,895 5,920,159 68,796 68,796 0
OIG 42,465,487 415,809 0 0 0
PHMSA 28,261,569 4,021 0 0 0
RITA 19,823,586 13,337 0 0 0
STB 1,259,489 10,832 27,112 27,112 0
TOTAL $25,007,231,396 $1,428,544,765 $6,546,901 $1,217,525 $45,109
7. DEPARTMENT PLANS FOR ENSURING MANAGERS ARE HELD ACCOUNTABLE FOR REDUCING AND RECOVERING IMPROPER PAYMENTS

Departmental management continues to take an active role in ensuring that agency managers are held accountable for reducing and recovering improper payments. The Deputy CFO has taken the lead in this initiative and is heavily involved in the daily decisions of the program. Additionally, the Department's CFO has taken a role in advocating for the program.

On a monthly basis, the Department's top financial officers are briefed on the status of improper payment initiatives. Additionally, monthly reports are distributed to all levels of the Department outlining the work of the recovery audits.

To date, there have been no significant improper payments identified. If improper payments are found, the Office of the Secretary/Office of Financial Management will work with the organization to ensure that reduction targets and recovery rates are established.

8. INFORMATION SYSTEMS AND INFRASTRUCTURE REQUIREMENTS TO REDUCE IMPROPER PAYMENTS

The Department is completing full implementation of the IPIA and at this point has not identified a need for additional information systems or infrastructure.

9. DESCRIBE THE STATUTORY OR REGULATORY BARRIERS WHICH MAY LIMIT THE AGENCIES' CORRECTIVE ACTIONS IN REDUCING IMPROPER PAYMENTS AND ACTIONS TAKEN BY THE AGENCY TO MITIGATE THE BARRIERS' EFFECTS.

The Department has not identified any statutory or regulatory barriers that limit its corrective action efforts.