
DOL Annual Report, Fiscal Year 2007
Performance and Accountability Report

Reporting Performance Results

The Performance Section of this report presents results at the Strategic Goal and Performance Goal levels. The four Strategic Goals established in our FY 2006-2011 Strategic Plan are general outcomes clearly linked to the Department's mission. Performance goals articulate more specific objectives associated with one or more programs administered by a distinct DOL agency. Progress in achieving these goals is measured by one or more quantifiable performance indicators, for which targets are established in the annual Performance Budget.

Each of the four strategic goal sections is introduced by an overview of results, net cost and future plans for its component performance goals. Results for each performance goal are presented in a brief section that includes the following:

  • Headlines describe the goal in very basic terms.
  • Goal numbers (e.g., 07-1A) start with a two-digit year corresponding to the funding (budget) period. The single digit following the hyphen identifies the strategic goal, and the letter distinguishes the performance goal from others in the same group. The agency acronym (e.g., BLS) is in parentheses. Finally, we indicate whether the program is reporting on a fiscal year (FY) or program year (PY).1 (A parsing sketch of this convention follows the list.)
  • Goal statements appear in italics.
  • Indicators, Targets and Results tables list each indicator, its targets, and its results for the reporting period and for previous years with data for the same indicators. Indicators that were dropped prior to the current year are not shown; however, a note indicates where additional historical performance information (legacy data) can be obtained. Where all data for any year are shown, goal achievement is indicated. Where "baseline" appears in the target cell for new indicators, no data were available for establishing a numerical target, and these data do not count toward goal achievement. If results improve over the prior year but do not reach the target, "I" appears in the target cell. Net cost associated with the goal and indicators is also provided.2
  • Program Perspectives and Logic narratives describe the purpose of the program, how its activities are designed and managed to have a positive impact on the goal, how it measures success, and the external factors that influence performance. Photos and vignettes communicate examples of programs' impact at the personal level.
  • Analysis and Future Plans narratives interpret results, assess progress, explain shortfalls and describe strategies for improvement. Performance data at the indicator level and net cost at the goal level are displayed in charts where sufficient data are available to illustrate trends.
  • PART, Program Evaluations and Audits narratives provide updated information on Program Assessment Rating Tool reviews and improvement plans. Relevant audits and evaluations completed during the fiscal year are summarized in tables that highlight study purpose, major findings, recommendations and follow-up actions.
  • Data Quality and Major Management Challenges narratives discuss DOL's confidence in the performance information reported for the goal's measures and address management challenges that may have significant implications for achievement of program performance goals.3
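
To make the goal numbering convention concrete, the following sketch parses a goal label of the form described above. It is purely illustrative: the function and field names are invented for this example, and the assumption that the FY/PY designation may be appended to the label is ours, not the report's.

    import re
    from typing import NamedTuple, Optional

    # Hypothetical record for the parts of a goal label such as "07-1A (BLS)".
    # Field names are invented for this sketch, not DOL terminology.
    class GoalNumber(NamedTuple):
        budget_year: int        # two-digit funding (budget) period, e.g. 07
        strategic_goal: int     # single digit identifying the strategic goal
        performance_goal: str   # letter distinguishing the goal within its group
        agency: str             # agency acronym, e.g. BLS
        period: Optional[str]   # "FY" or "PY" if appended to the label

    GOAL_PATTERN = re.compile(
        r"^(?P<year>\d{2})-(?P<sg>\d)(?P<pg>[A-Z])"
        r"\s+\((?P<agency>[A-Z]+)\)"
        r"(?:\s+(?P<period>FY|PY))?$"
    )

    def parse_goal_number(label: str) -> GoalNumber:
        """Split a goal label like '07-1A (BLS) FY' into its components."""
        m = GOAL_PATTERN.match(label.strip())
        if m is None:
            raise ValueError("unrecognized goal label: %r" % label)
        return GoalNumber(
            budget_year=int(m.group("year")),
            strategic_goal=int(m.group("sg")),
            performance_goal=m.group("pg"),
            agency=m.group("agency"),
            period=m.group("period"),
        )

    # parse_goal_number("07-1A (BLS) FY") -> GoalNumber(budget_year=7,
    #     strategic_goal=1, performance_goal='A', agency='BLS', period='FY')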

Data Quality

This report is published six weeks after the end of the fiscal year. Since the Department uses a wide variety of performance data submitted by diverse systems and governed by agreements with State agencies and grant recipients, it is not possible in all cases to report complete data for the reporting period. The Department requires each agency responsible for performance goals in this report to submit a Data Estimation Plan in February that identifies, for each indicator, whether complete data are expected by the deadline for clearance and final review of the report in early October. If the data will not be available by then, the agencies must submit an acceptable plan to estimate results for the remainder of the year. Methodologies developed by agencies' program analysts are reviewed by the Department's Center for Program Planning and Results and the Office of Inspector General (OIG). The most common methods are substitution or extrapolation of two or three quarters of data and — for data with significant seasonal variation — use of the missing period's results from the previous year. Estimates are clearly identified wherever they are used in this report. With very few exceptions, final (actual) data are available by the end of the calendar year; these data will be reported in the FY 2008 Performance and Accountability Report.
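
To illustrate the two estimation methods named above, the sketch below extrapolates a full-year total from two or three reported quarters and, for data with seasonal variation, substitutes the prior year's results for the missing quarters. The function names and the assumption that results are additive quarterly totals are ours for illustration only; actual agency methodologies vary and are reviewed as described above.

    def extrapolate_full_year(reported_quarters):
        """Extrapolate a full-year total from two or three quarters of data,
        assuming the missing quarters resemble the reported average."""
        if not 2 <= len(reported_quarters) <= 3:
            raise ValueError("extrapolation expects two or three quarters")
        average = sum(reported_quarters) / len(reported_quarters)
        return sum(reported_quarters) + average * (4 - len(reported_quarters))

    def substitute_prior_year(reported_quarters, prior_year_quarters):
        """For data with significant seasonal variation, fill each missing
        quarter with the same quarter's result from the previous year."""
        if len(prior_year_quarters) != 4:
            raise ValueError("need all four quarters of prior-year data")
        missing = prior_year_quarters[len(reported_quarters):]
        return sum(reported_quarters) + sum(missing)

    # Three quarters reported, fourth estimated:
    # extrapolate_full_year([100.0, 110.0, 120.0])          -> 440.0
    # substitute_prior_year([100.0, 110.0, 120.0],
    #                       [90.0, 105.0, 115.0, 130.0])    -> 460.0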

OIG assesses the internal controls of DOL agencies, i.e., the systems used to validate, verify and record data submitted by field staff and partners (e.g., grantees). These systems are identified as Data Sources at the bottom of each performance goal history. An absence of findings does not, by itself, establish that the data are accurate.

Material inadequacies are disclosed in the Secretary's Message, which includes a statement on the adequacy of program performance data that is supported by signed attestations from each agency head responsible for a performance goal in this report. OMB Circular A-11 defines "material inadequacy" as a condition that significantly impedes the use of program performance data by agency managers and government decision makers. For Departmental management, this threshold is established at the performance goal level: data insufficient to permit a determination of goal achievement. This is an unlikely occurrence, as most DOL performance goals have sufficient indicators and historical data to allow reasonable estimation of results. Generally, if agency- or program-level managers do not trust their own data, the results are not reported, because the problems created by skewed targets and trends are much worse than a gap in the data.

Because it aspires to maintain high standards and because performance information is used more than ever for decision-making and accountability, DOL recently created a Data Quality Assessment process to improve the quality of performance information reported to the public. The Data Quality and Major Management Challenges section of each performance goal narrative includes an overall rating of data quality (Excellent, Very Good, Good, Fair, or Unsatisfactory). Discussions summarize the rationale for these ratings and, for all but those rated Excellent, improvement plans.

Data assessments are based on seven criteria, of which two, accuracy and relevance, are weighted twice as heavily as the others in the rating system (see box below). If data do not satisfy the standards for both of these criteria, the rating is Data Quality Not Determined. This reflects DOL policy that further assessments of quality are irrelevant if the information is not reasonably correct or worthwhile.

Data Quality Rating System

Both bulleted descriptions under a criterion must be satisfied to receive points; no partial credit is awarded. The rating scale reflects 20 points for Section One "threshold" criteria plus additional points earned in Section Two. Data that do not satisfy both criteria presented in Section One are given the rating Data Quality Not Determined, regardless of the points achieved in Section Two. This rating indicates that the agency is unable to assess data quality because the data do not meet a minimum threshold.

Section One: 20 points

Accurate: Data are correct. (10 points)

  • Deviations can be anticipated or explained.
  • Errors are within an acceptable margin.

Relevant: Data are worth collecting and reporting. (10 points)

  • Data can be linked to program purpose to an extent that makes them representative of overall performance.
  • The data represent a significant budget activity or policy objective.

Section Two: 25 points

Complete: Data should cover the performance period and all operating units or areas. (5 points)

  • If collection lags prevent reporting full-year data, a reasonably accurate estimation method is in place for planning and reporting purposes.
  • Data do not contain any significant gaps.

Reliable: Data are dependable. (5 points)

  • Trends are meaningful; i.e., data are comparable from year to year.
  • Sources employ consistent methods of data collection and reporting and uniform definitions across reporting units and over time.

Timely: Data are available at regular intervals during the performance period. (5 points)

  • The expectation is that data are reported quarterly.
  • Data are current enough to be useful in decision-making and program management.

Valid: Data measure the program's effectiveness. (5 points)

  • The data indicate whether the agency is producing the desired result.
  • The data allow the agency and the public to draw conclusions about program performance.

Verifiable: Data quality is routinely monitored. (5 points)

  • Quality controls are used to determine whether the data are measured and reported correctly.
  • Quality controls are integrated into data collection systems.

Rating                         Points
------                         ------
Excellent                      45
Very Good                      40
Good                           30-35
Fair                           25
Unsatisfactory                 20
Data Quality Not Determined    Varied
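
The scoring rules above reduce to a simple computation: both Section One criteria (10 points each, no partial credit) must be satisfied before a rating is assigned, and Section Two adds five criteria at 5 points each. The sketch below is a minimal illustration of that arithmetic, not DOL's actual assessment tooling; the function name and criterion keys are assumed for this example.

    SECTION_ONE = ("accurate", "relevant")            # threshold criteria, 10 points each
    SECTION_TWO = ("complete", "reliable", "timely",  # additional criteria, 5 points each
                   "valid", "verifiable")

    def rate_data_quality(criteria_met):
        """Apply the rating scale: both Section One criteria are required;
        Section Two points count toward a rating only past that threshold."""
        section_two_points = sum(5 for c in SECTION_TWO if criteria_met.get(c))
        if not all(criteria_met.get(c) for c in SECTION_ONE):
            # Points "varied": the threshold is not met, so no rating applies.
            return ("Data Quality Not Determined", section_two_points)
        points = 20 + section_two_points
        scale = {45: "Excellent", 40: "Very Good", 35: "Good",
                 30: "Good", 25: "Fair", 20: "Unsatisfactory"}
        return (scale[points], points)

    # All criteria met except timeliness: 20 + 4 * 5 = 40 points.
    # rate_data_quality({"accurate": True, "relevant": True, "complete": True,
    #                    "reliable": True, "timely": False, "valid": True,
    #                    "verifiable": True})  -> ("Very Good", 40)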

DOL piloted the Data Quality Assessment process in FY 2006. By doing so, DOL not only increased the transparency of data quality among performance goals, but also implemented a forward-looking method for systematically evaluating data systems using widely accepted criteria. In its pilot year, the assessments provided a valuable baseline by identifying weaknesses and establishing improvement plans. By increasing the visibility of data quality, DOL is using the assessment process as an important benchmark for monitoring progress and stimulating change.

In this year's report, data for four performance goals are rated Excellent, ten are Very Good, six are Good, two are Fair, and two are Data Quality Not Determined. No performance goals were rated Unsatisfactory. Ratings this year largely remained the same; exceptions are higher ratings for ESA's Wage and Hour Division, Office of Federal Contract Compliance Programs and Office of Workers' Compensation Programs goals, and lower ratings for ETA's Senior Community Service Employment Program and VETS' Uniformed Services Employment and Reemployment Rights Act goals. For two other goals, FY 2006 and FY 2007 ratings were not directly comparable due to the restructuring of performance goals. The Community Based Job Training Grants program did not report results and therefore was not included. Given the short interval between the FY 2006 year-end pilot assessment and the FY 2007 mid-year assessment, this year's reporting focused on improvement plans to address the criteria not met in the pilot-year assessment and considered the impact of any pertinent reports or audits released in FY 2007.

OIG continues to identify data quality issues among its Major Management Challenges. Central to this ongoing challenge is the Department's limited ability to ensure the quality of data reported by States and other sources below the Federal level. The Employment and Training Administration (ETA) is the principal agency affected by these findings. While ETA's data quality assessments consistently identify verifiability as a weakness, those findings relate strictly to the data collection systems supporting its performance goals; the OIG findings, by contrast, cover data quality for sources not tied to the performance goals. Beyond ETA, the societal impact of compliance assistance, enforcement, policy development, and outreach also poses measurement challenges.

Individual agencies must find a balance between measuring activities linked to their performance goals and measuring the far-reaching benefits to their constituents. Multiple performance measures, often relying on various data collection systems, allow an agency to focus on key performance areas linked to specific strategies. It is important to recognize that the data quality rating system evaluates only those data collection systems which support performance indicators appearing in this report. Program evaluations and audit reports, such as those listed in the performance goal chapters, supplement the performance data, give agencies a more comprehensive view of the effectiveness of their programs, and help identify areas for improvement.

In FY 2008, the data quality assessment process will entail full re-assessments for all performance goals, which could result in upward or downward adjustments of ratings for some goals. As data quality standards are further institutionalized and awareness of data quality increases Department-wide, DOL expects the quality and quantity of information to improve, which may also produce minor changes to ratings. As a testament to the robustness of the assessments to date, subsequent pertinent reports and audits generally confirmed assessment findings. This year's improvement plans focused on remedying deficiencies among data systems that are mostly rated between Good and Excellent. Nonetheless, the Department views these results as the beginning of a long-term strategy to raise the bar in data quality and performance reporting.

Planning and Evaluation at the Department of Labor

The diagram below illustrates the theoretical foundation of performance planning and evaluation structures, processes and results covered in this section of the Performance and Accountability Report. The outer circle represents the scope of DOL's resources and influence. At the core is our mission. Everything in between is in continuous motion, clockwise and counter-clockwise. Quadrants represent the planning elements that are tied to periodic budget documents. Spokes incorporate the actual processes that follow resource allocation decisions and translate theory into practice. These elements are managed on a real-time basis; emergent cost and results information ultimately closes the feedback loop via reporting documents and the next period's budget. A more detailed description of planning and evaluation processes follows the diagram.

[Figure: Planning and evaluation at DOL]

Planning Cycle
The planning cycle begins in the upper left quadrant and moves clockwise. While planning can occur throughout the year, budget formulation officially launches the cycle. At this stage, programs define and prioritize desired outcomes, translating general aspirations into realistic program goals. With clearly articulated goals in place, programs then need a mechanism for measuring their progress against those goals. Performance indicators, which appear throughout this report, attempt to capture the results of program activities. Programs collect and monitor the data for these indicators in order to gauge progress toward their performance goals. Managers may adjust program strategies based on these results. As the next budget formulation cycle nears, decision-makers can use performance data to allocate resources strategically to the most effective program strategies. Decision-makers also consider cost, weighing which strategies will yield the maximum benefit for the least cost to the public.

Evaluation Cycle
Starting with the same quadrant but this time moving counter-clockwise, the budget defines fiscal parameters for execution of strategies constrained by program authorization legislation. Strategies materialize as activities, the results of which are assessed using performance indicators. Data from the performance indicators demonstrate whether goals are achieved. Outcomes — in generic terms, demonstrated effectiveness at achieving goals — justify further budget requests.


1 FY 2007 began October 1, 2006 and ended September 30, 2007. PY 2006 began July 1, 2006 and ended June 30, 2007.
2 See also the DOL Program Net Costs table in the Cost of Results section of the Program Performance Overview (Management's Discussion and Analysis).
3 See the Major Management Challenges table in Management's Discussion and Analysis.
