Anticipating Evaluation Needs

Identifying key evaluation questions at the outset improves the quality of the program design and guides data collection during implementation.

USAID's CDCS guidance places a substantive as well as a budgetary spotlight on evaluation, which, together with learning, is a key element of USAID Forward, the Agency's ambitious reform initiative. In a CDCS, Missions are expected to identify:

  • High priority evaluation questions for each DO that could be addressed over the CDCS period.
  • At least one opportunity for an impact evaluation of a project or project component within each DO.

Identifying an “opportunity” for an impact evaluation does not constitute an obligation to conduct a rigorous evaluation of the specific project component or intervention identified. Rather, it is the first step in a Mission process for identifying interventions, linked to pilot and other types of projects and activities, that are innovative and would benefit from, or be subject to, USAID's requirement to conduct impact evaluations where untested hypotheses are involved.

Evaluation questions, while always specific to a particular DO and the context in which USAID is trying to achieve that DO, often fall into broad categories that can be used during CDCS development as reminders about issues that might be important for a particular Mission to consider. USAID's CDCS Guidance includes sample evaluation questions and provides a list of aspects of programs to consider when identifying high priority questions for future evaluations:

  • The development hypotheses and key assumptions underlying the program
  • Program impact
  • The policy approach in a specific sector
  • The efficiency of USAID's implementation approach, with particular attention to program costs

As an evaluation focus, the validity of the development hypotheses embedded in a program strategy is particularly important, as this is something that cannot be determined from performance monitoring alone. A parallel list, issued in the OECD's Development Assistance Committee (DAC) evaluation guidance, may also be useful to consider. It covers relevance, effectiveness, efficiency, impact, and sustainability, with sample questions provided under each of these topics.

Beyond this range of questions, there may be other reasons for considering specific questions to be in the “high priority” category, including questions that relate to scaling up a program or are of particular interest to legislators or other high-level stakeholders.

Many evaluation questions across this spectrum can and should be framed with gender as a dimension of interest. When framing evaluation questions, it is also important to focus on what information monitoring will provide, and to avoid listing questions that would simply duplicate those efforts. In addition, the appropriateness of an evaluation question is often a function of timing. Process questions about implementation and partner relationships may be more useful to include in mid-term than in final evaluations, while questions about sustainability might only be answered definitively in an ex-post evaluation conducted after a project terminates. When considering evaluation questions, it is also worth remembering that USAID's Evaluation Policy requires that:

  • All large programs and projects under a DO be evaluated, meaning those above average size for the Mission's portfolio, and
  • Impact evaluations be conducted for innovative development interventions, as well as for any activity within a project that involves an untested hypothesis.

For a Mission strategy that includes a focus on trade performance as a mechanism for stimulating economic growth, evaluation questions might consider both the development hypothesis involved and the program's outcomes in target regions as well as for specific population groups. Illustrative columns from a CDCS Evaluation Questions Template, included in this kit for your use, are shown below.

CDCS DO: DO 1, Broad-based economic growth enhanced

High Priority Evaluation Questions:
  • To the extent that growth indicators improved over the CDCS period, what factors appear to have been the most important drivers of these improvements?
  • What effects of improvements in trade performance promoted by USAID's strategy are discernible in the relatively poorer districts in the country on which USAID focuses?
  • What have been the differences between men's and women's participation in and benefits from USAID's focus on trade performance in the CDCS economic growth strategy?