In a speech early in his tenure, USAID Administrator Shah made a commitment to invest in meta-analysis as a learning tool at the Agency level when he said, "we will begin hosting a regular series of summits that we're calling evidence summits, to study our own actions and explore real ideas for improvement…." USAID has since held several such summits, including one in 2010 on "Promoting Broad Based Growth."
Learning from and using evaluations, studies tell us, happens in two ways. The first is immediate, or instrumental, use: we make adjustments in a project when an evaluation's evidence and recommendations are practical and compelling. The second, conceptual use, occurs over time and often depends upon findings from more than one evaluation. Conceptual use is less clearly linked to immediate action; more likely, the cumulative evidence from a number of evaluations will lead to changes in the design of field programs or in Agency policy.
USAID has a long history of synthesizing the findings from sets of evaluations, along with other relevant research, and using the results to guide program development. That process is exactly what USAID calls for in its CDCS and Project Design guidance, as discussed in Locating Evaluations and Summarizing Evaluation Evidence earlier in this kit. Meta-analyses vary in scale; they may be undertaken at the country level, for example, when a number of evaluations in a particular country need to be reviewed prior to the development of a new strategy.
At a broader level, syntheses of evaluation findings, or meta-evaluations, have been an important element of USAID's evaluation practice for over four decades, with the earliest of these studies undertaken in support of early predecessors of USAID's Evidence Summits. These summits are strengthening Agency learning on important topics, including "Promoting Broad-Based Economic Growth," "Counterinsurgency and Counterterrorism," "Community Health Worker Performance," and "Strengthening Country Systems," which is highlighted on this page and on USAID's website.
In addition to meta-analyses, which synthesize the substantive findings of previous evaluations, there is a second and more rigorous approach to conducting an evaluation synthesis. This process, called a Systematic Review, is normally used only with impact evaluations that involve a counterfactual and the randomized assignment of individuals or locations to treatment and control groups. A Systematic Review not only extracts the main findings from a set of evaluations or research studies, it also rates the quality of the evidence that stands behind those findings. Domestically, Systematic Reviews of randomized controlled trials in the health field are produced by the Cochrane Collaboration, and Systematic Reviews in other fields are undertaken by the Campbell Collaboration. Internationally, 3ie is carrying out Systematic Reviews of evaluations of development program interventions, primarily those that involve an experimental or quasi-experimental design. The sidebar on this page highlights and leads to the 3ie repository of Systematic Reviews in the international development field, including one that looked at the impact of the U.S. African Growth and Opportunity Act (AGOA).