Audit Results

Overview

The audit of the agency Open Government Plans reveals wide variation in the quality of the plans. Primary differences between strong and weak agency plans included:
  • Level of Specificity – Strong plans included deadlines and specific steps to accomplish goals, as required by the Directive; plans were weakened by including “plans to plan.”
For example, each project described in the strong National Aeronautics and Space Administration (NASA) plan contains specific explanations of its goals and its place within the larger open government effort, as well as timelines and milestones.
  • Easy Availability of Information – Good plans included links to information that let the public know how they can access information, as required by the Directive; plans were weakened by not making information about public access easy to find.
For example, the Office of Management and Budget’s plan contains an inventory of high value datasets, but does not provide links to these datasets or explain where they can be found, and many of the items on the inventory could not be located when searching the agency’s website. Agencies should be encouraged to provide links to the inventories on their Open Government webpages and in other prominent places.
  • Thoughtful Identification of Key Audiences and Needs – Plans scored high marks for describing how they are making information available to different stakeholders in a way that is most useful for each constituency; several plan evaluations suffered because agencies failed to either identify their stakeholders with any level of specificity or explain how they will meet these constituencies’ needs.
The Social Security Administration’s plan benefited from consultation with a range of internal, external, customer, and collaborator stakeholders during its development, as reflected in the breadth of audiences it recognizes. Similarly, the Department of Education is going beyond the basic requirements to improve transparency by presenting information in multiple formats tailored to the needs of the agency’s stakeholders.
  • Quality and Sustainability of Flagship Initiatives – Strong plans included innovative flagships with thoughtful plans for improving and sustaining the project; plans were weakened by agencies failing to explain how the project is different from any other agency initiative to make information available and by a lack of planning for how the initiative will evolve.
For example, the Department of Labor’s flagship initiative, an online enforcement database, could be improved with more discussion of how the agency intends to measure transparency and information use, expand the public dialogue, and improve the initiative over time. The EPA could improve its laudable flagship initiatives by more specifically identifying the stakeholders it will reach out to, or how it will identify them.

Many of the deficiencies in the plans are easily remedied. As many agencies refer to their plans as “first drafts” and “living documents,” we hope agencies will revise their plans in the near future to address these shortfalls.

Indeed, as noted above, some agencies have already offered version 1.1 of their plans. We applaud this type of agency initiative and believe it embodies the spirit of the OGD.

Methodology

This audit was completed between April 12 and April 23 using new media tools that involved a number of evaluators from multiple organizations. The criteria for assessment were developed based on the requirements in the OGD. For each item assessed, evaluators gave a score of 0 if the plan did not address the item, 1 if the plan referenced the item but did not include a clear roadmap to achieve results, and 2 if the plan fulfilled the requirement. The maximum basic score was 60, except for agencies without original classification authority, where the maximum was 58 points.

Evaluators could also give an agency a bonus point for exceeding the requirement on each item. The highest number of bonus points received by any agency was 18. Bonus points were added to the basic score as a form of extra credit. On several occasions the evaluators discussed how each item would be scored, particularly the award of bonus points.
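To make the arithmetic concrete, the sketch below (in Python) shows how a plan’s basic and bonus scores combine under the scheme described above. It is a minimal illustration only: the item names and point values are hypothetical placeholders, not the audit’s actual checklist.

    # Minimal sketch of the scoring arithmetic described above.
    # Item names and scores are hypothetical placeholders; the real checklist
    # is drawn from the OGD requirements (a 60-point maximum with 2 points per
    # item implies roughly thirty items; 58 points applies to agencies without
    # original classification authority).

    def max_basic_score(has_classification_authority: bool) -> int:
        """Maximum basic score: 60 points, or 58 for agencies without
        original classification authority."""
        return 60 if has_classification_authority else 58

    def score_plan(item_scores, bonus_points, has_classification_authority=True):
        """Each item is scored 0 (not addressed), 1 (referenced but no clear
        roadmap), or 2 (requirement fulfilled); bonus points are added on top
        of the basic score as extra credit."""
        if any(s not in (0, 1, 2) for s in item_scores.values()):
            raise ValueError("each item must be scored 0, 1, or 2")
        basic = sum(item_scores.values())
        return {
            "basic": basic,
            "bonus": bonus_points,
            "total": basic + bonus_points,
            "max_basic": max_basic_score(has_classification_authority),
        }

    # Hypothetical example: three items scored 2, 1, and 0, plus 3 bonus points.
    print(score_plan({"timelines": 2, "high_value_data": 1,
                      "flagship_sustainability": 0}, bonus_points=3))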

Evaluators used Google Docs to record their assessments. Evaluators were able to look at all assessments to compare with their own.

For more information about the methodology, click here.

Rankings

The audit reveals wide variation in the strength of plans. Leading agencies developed plans that exceeded the requirements of the OGD in important and innovative ways. Final rankings are based on the overall score the plan earned (including bonus points) out of the agency’s maximum possible basic score. To view the final rankings, click here. To view each assessment of the agency plans, click on the agency on the left hand side of this webpage.
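As a minimal illustration of how that ranking figure can be computed, assuming the overall score means total points (basic plus bonus) divided by the agency’s maximum basic score of 60 or 58, the short Python sketch below uses made-up placeholder scores, not the audit’s actual results.

    # Hypothetical ranking calculation: total points (basic + bonus) divided by
    # the agency's maximum possible basic score (60, or 58 for agencies without
    # original classification authority). Agency names and scores are placeholders.

    def ranking_score(basic, bonus, max_basic):
        return (basic + bonus) / max_basic

    plans = {
        "Agency A": (55, 12, 60),   # placeholder (basic, bonus, max_basic)
        "Agency B": (40, 2, 58),
        "Agency C": (30, 0, 60),
    }
    for name in sorted(plans, key=lambda a: ranking_score(*plans[a]), reverse=True):
        print(f"{name}: {100 * ranking_score(*plans[name]):.1f}%")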

No plan fulfilled all of the requirements of the OGD, but seven agencies created plans that stand out for their overall strength: NASA, the Department of Housing and Urban Development (HUD), the Environmental Protection Agency (EPA), the Office of Personnel Management (OPM), the Department of Transportation (DOT), the Nuclear Regulatory Commission (NRC), and the Department of Labor (DOL). These plans stand out from the others in large part because of the level of detail included in the plans: not only do they lay out a roadmap for increasing transparency, participation, and collaboration, they also describe specific milestones the public can use to hold the agency accountable for implementing the plans. The plans themselves also made the agencies more transparent by offering direct links to specific information.

Of the top tier plans, NASA, HUD, and EPA stand out for presenting model plans. These three plans meet almost all of the requirements of the OGD and, where they do not fulfill a requirement, at least make some progress toward the goal. With bonus points, these agencies’ plans exceed the maximum possible points on the basic score. The bonus points awarded to these agencies recognized important and innovative initiatives. Examples include NASA’s Participatory Exploration Office, which supports research on new technologies to increase public participation, coordinates NASA-wide efforts to incorporate new participatory exploration approaches into future work, and acts as a clearinghouse for identifying and communicating best practices both internally and externally; and EPA’s plans for frequent reviews and progress updates. To join the ranks of the leaders, agencies must implement such leading practices.

Between the time we conducted the audit and published this report, several agencies contacted evaluators and expressed intent to address many of the weaknesses identified by the audit. The Department of Transportation, for example, has already issued version 1.1 of its plan. The revised plan will be re-evaluated and a new score will be issued in June, but it is already clear that the revision addresses many of the deficiencies we identified in this audit. We are pleased that federal agencies want to improve their plans, and we will be happy to share the detailed assessments underlying this audit so that agencies can make improvements. After all, that is the end goal.

The five lowest scores went to the Department of Justice (DOJ), the Department of Energy (DOE), the Office of Management and Budget (OMB), the Department of Defense (DOD), and the Department of the Treasury. Examples of the failings that earned these agencies low scores are provided below; they are consistent with the general trends identified earlier:
  • DOJ’s Open Government Plan is written in general terms and proposes very broad actions which the agency “should” or “will” accomplish in the future, with very few timeframes provided. Although 66 ideas were submitted to the DOJ IdeaScale, including many that proposed new high value data sets that would be of great public interest, the DOJ plan does not list any currently available data sets, or any specific new high value data sets it plans to release in the future.
  • DOE’s plan is weakened by its failure to explicitly state how it will revise its current practices to increase public participation, or to propose changes to its internal management and administrative policies to improve participation.
  • OMB’s Open Government Plan is missing several elements required by the Open Government Directive and fails to provide sufficient information on many other criteria. OMB’s proposals for improving transparency, participation, and collaboration need to provide greater detail on how the agency will proceed on each, including specific milestones and deliverables against which the public will be able to judge its performance.
  • DOD’s plan lays out very little in the way of concrete commitments. DOD could improve its plan by adding specificity (dates and milestones) to its currently described “plans to plan.”
  • The Department of the Treasury’s plan fails to include plans and processes to meet specific OGD requirements. The plan would be greatly improved by a description of the specific steps the agency will take to meet those requirements.

Of particular disappointment to many of the evaluators is the low ranking of the plans developed by OMB and DOJ. Given that OMB has responsibility for overseeing portions of the OGD, the evaluators expected the agency to seize this opportunity to lead by example. For instance, OMB could easily have taken this opportunity to make its new contractor accountability database – the Federal Awardee Performance and Integrity Information System (FAPIIS) – accessible to the public. Similarly, DOJ’s ranking at the bottom of the stack is disappointing given its charge to implement the Freedom of Information Act (FOIA), America’s oldest public access law, and Attorney General Eric Holder’s 2009 guidance to federal agencies, which stated his strong support for President Obama’s commitment to open government.