Evaluation

  • USAID Releases the Evaluation Policy Five-Year Report
  • USAID’s Evaluation Policy sets ambitious standards for high-quality, relevant and transparent evaluations.
  • Learn more about the latest available information on evaluation at USAID.
  • USAID evaluations are publicly available on the Development Experience Clearinghouse (DEC).

With the release of the Evaluation Policy in January 2011, USAID made an ambitious commitment to rigorous and quality program evaluation – the systematic collection and analysis of information to improve effectiveness and inform decisions about current and future programming. The Evaluation Policy was updated in September 2016 to reflect revisions to USAID’s ADS Chapter 201 on Program Cycle Policy, the Agency's operational model for planning, delivering, assessing and adapting development programming in a given region or country to advance U.S. foreign policy.

Since the release of the Evaluation Policy, USAID has:

  • Increased the number of evaluations commissioned each year to an average of more than 200 per year, totaling more than 1,000 evaluations since 2011.
  • Provided formal training in evaluation to more than 1,600 USAID staff.
  • Improved the quality of evaluations by planning in advance, using the best methods to answer a focused set of questions, and encouraging that evaluations be conducted by external experts.
  • Reported transparently on evaluation findings, particularly by sharing evaluation reports online at the Development Experience Clearinghouse.
  • Used evaluation findings to inform project design, make mid-course corrections, and increase knowledge and learning in specific sectors.
  • Strengthened program monitoring so that evaluations can focus on a more complex set of questions beyond whether a project is meeting its targets.

Evaluation Policy Implementation

The reports below summarize evaluation requirements and practices at USAID before and after the Evaluation Policy, major accomplishments during the first five years of implementation, and priority activities to support policy implementation moving forward.


Evaluation Quality and Utilization

Relevant and high-quality evaluation is an important tool to track the progress, results and effectiveness of international development programs. Evaluation can help explain why programs are succeeding or failing, and can provide recommendations for how best to adapt to improve performance. Along with monitoring, evaluation contributes evidence to improve strategic planning, project design and resource decisions, and both are part of a greater body of knowledge and learning.

To better understand whether these and other efforts are working, the Bureau for Policy, Planning and Learning commissioned independent studies to examine evaluation quality (2013) and evaluation utilization (2016) at USAID. These two studies found that both the quality and use of evaluations have increased. The recommendations in these studies will inform ongoing evaluation improvement efforts.


Learning

USAID integrates into its work a strong emphasis on strategic collaboration, continuous learning and adaptive management – Collaborating, Learning and Adapting (CLA). CLA can be instrumental in helping to create the conditions for development success by:

  • facilitating collaboration internally and with external stakeholders;
  • feeding new learning, innovations and performance information back into the strategy to inform funding allocations, program design and project management;
  • translating new learning, as well as information about changing conditions, into iterative strategic and programmatic adjustments; and
  • catalyzing collaborative learning, systemic analysis and problem solving among developing country citizens and institutions to develop and implement programs that are more effective at achieving results.

CLA includes systematically generating and sharing knowledge about how best to achieve development outcomes through well-designed and executed projects and using that knowledge to inform decisions, adapt ongoing projects and improve the design of future projects.

USAID explores and implements approaches to intentionally embed learning through its programming. For more information about how USAID approaches learning, please visit USAID's Learning Lab, the Agency’s platform for generating and sharing information, tools and resources on how development practitioners can work together to integrate learning throughout USAID’s Program Cycle. Here, USAID staff and partners jointly create, share, refine and apply practical approaches to more effectively ground programs in evidence and quickly adapt based on new learning and changing contexts, thereby maximizing development outcomes.

For examples of how USAID has used evaluations to learn from and inform its work, please see the following case studies:


Monitoring and Evaluation Resources

For a complete list of USAID’s public evaluations, visit the Development Experience Clearinghouse.

  • Evaluation Toolkit - Curates the latest USAID guidance, tools and templates for initiating, planning, managing and learning from evaluations primarily for USAID staff and partners involved in any phase of the evaluation process.
  • Performance Management Plan (PMP) Toolkit - Designed to serve as an ongoing resource for USAID staff and partners engaged in performance management roles as they plan for and manage effective performance monitoring and evaluation over the course of the Mission’s strategy.

Last updated: November 16, 2016
