The Evaluation Plan section of a PMP identifies and summarizes all evaluations as they are planned across the Mission and over time by DO. Evaluations that address strategic-level concerns are best planned during the CDCS–PMP development period, as are impact evaluations that will span most of the strategy period and multi-component whole-of-project evaluations. Mid-course and final evaluations for existing activities should also be included in a PMP. The PMP Evaluation Plan section can also be amended during the strategy period to add further evaluations, particularly if new information arises indicating that an evaluation would be appropriate for accountability or learning purposes.
For each anticipated evaluation, ADS 201 states that the following information is to be provided as it becomes available:
The most substantive of these requirements, and thus the two likely to require the greatest consideration by DO teams, are the one involving the type of evaluation to be undertaken and the one dealing with possible evaluation questions, which are closely linked to an evaluation's purpose.
Informally, USAID estimates that roughly 90 percent of the evaluations it conducts are performance evaluations, with impact evaluations accounting for the remaining 10 percent. Within the performance evaluation cluster, evaluations can be broken down further by their timing and scope. Evaluation timing is often driven by the purpose of an evaluation, with mid-course and final evaluations of projects and activities each accounting for about half of all performance evaluations. Ex-post evaluations are possible but have historically been few in number. Whole-of-project evaluations of large, multifaceted projects are also relatively rare; they are more common for smaller projects that are funded through a single mechanism and examined, as a whole, in a final performance evaluation. It is useful to consider the full range of timing and scope options for the evaluations a PMP identifies, as this information will help Mission staff determine the likely start and end dates for its anticipated evaluations.
In order to respond to ADS 201’s requirement to identify anticipated evaluations by type in a PMP, the Mission must determine which of the evaluations it is considering for various DOs should be undertaken as a rigorous impact evaluation. As explained on the Anticipating Evaluation Needs page in the CDCS stage of the Program Cycle:
Impact Evaluations measure the change in a development outcome that is attributable to a defined intervention. Impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.
Missions and other Operating Units (OUs) must conduct an impact evaluation, if feasible, of any new, untested approach that is anticipated to be expanded in scale or scope through U.S. Government foreign assistance or other funding sources (i.e., a pilot intervention). (If it is not feasible to effectively undertake an impact evaluation, the Mission or other OU must conduct a performance evaluation and document why an impact evaluation wasn’t feasible.)
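To make the counterfactual idea in this definition concrete, the following is a minimal illustrative sketch of a difference-in-differences impact estimate, one common way of constructing a counterfactual. All figures here are invented for illustration only and are not drawn from USAID guidance or data.

```python
# Hypothetical illustration of a counterfactual-based impact estimate
# using a simple difference-in-differences calculation.
# All numbers are invented for illustration; they are not USAID data.

# Average outcome (e.g., household income in USD) before and after
# the intervention, for the treatment and comparison groups.
treatment_before, treatment_after = 100.0, 130.0
comparison_before, comparison_after = 100.0, 110.0

# Change observed in each group over the same period.
treatment_change = treatment_after - treatment_before      # 30.0
comparison_change = comparison_after - comparison_before   # 10.0

# The comparison group's change approximates the counterfactual:
# what would have happened to the treatment group absent the intervention.
# Subtracting it controls for factors other than the intervention.
impact_estimate = treatment_change - comparison_change

print(f"Estimated impact attributable to the intervention: {impact_estimate}")
# Estimated impact attributable to the intervention: 20.0
```

In this sketch, the observed change in the treatment group (30.0) overstates the intervention's effect because some change (10.0) occurred anyway, as shown by the comparison group; the credible counterfactual is what distinguishes an impact evaluation from simply measuring before-and-after change.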
Factors to consider when making an Impact Evaluation Decision, whether in time for it to be designated as such in a PMP, or if an impact evaluation decision is made at a later date, are discussed on the following page. Additional information on Impact Evaluation Designs and related topics is provided in later sections of this website.
Evaluation questions, while always specific to a particular DO and the context in which USAID is trying to achieve that DO, often fall into broad categories that can be used during PMP development as reminders about issues that might be important for a particular Mission to consider.
Another approach, offered by the OECD's Development Assistance Committee (DAC), suggests brainstorming possible questions for specific evaluations in clusters that represent different types of outcomes, including:
Other evaluation scholars, including Michael Quinn Patton, author of Utilization-Focused Evaluation, and Patricia Rogers, encourage evaluation clients and evaluators to focus first on who the users will be and what it is they want to know. Ms. Rogers’ presentation for USAID on this approach is featured on this page.
In addition to these approaches, USAID’s Evaluation Toolkit provides a brief guide entitled Tips for Developing Good Evaluation Questions (for Performance Evaluations), which includes a mix of principles and tips for developing good evaluation questions.
Many of the possible questions that evaluations under specific DOs might be asked to address can and should be framed with gender as a dimension of interest. When framing evaluation questions, it is also important to focus on what information monitoring will provide and to avoid listing questions that would simply duplicate those efforts. In addition, the appropriateness of an evaluation question is often a function of timing. Process questions about implementation and partner relationships may be more useful to include in mid-term than in final evaluations, while questions about sustainability might only be answered definitively in an ex-post evaluation conducted after a project terminates.