Managing Evaluations

USAID evaluation training courses for Agency staff anticipate that whenever a decision is made to undertake a Performance Evaluation or an Impact Evaluation, someone on USAID's staff will be designated as the Evaluation Manager for that effort. This may be the Operating Unit's Evaluation Point of Contact, or it may be another staff member. This individual helps ensure that evaluations are carried out in a manner that is consistent with USAID's Evaluation Policy.

The more an Evaluation Manager understands about USAID’s evaluation process and where quality control opportunities exist within that process, the more likely it is that they will be able to ensure that the evaluations they oversee meet USAID’s standards. To this end, it is useful to conceptualize the evaluation process as consisting of a series of distinct stages, each of which includes specific steps and evaluation products. While any division of the evaluation process into stages is arbitrary, the five stages shown in the table below help to isolate blocks of time within a process that may, in total, take six months to a year or more to complete, depending on the specific evaluation. A more detailed version of the Evaluation Management Process table can be accessed from this page. To complement this five-stage evaluation process table, the kit provides an Illustrative Evaluation Manager’s Checklist, which is featured on this page.

The Evaluation Management Process – Summary View
Stage 1: Decision to Evaluate to Issuance of SOW
Stage 2: Proposal Review to Approval for Data Collection to Begin
Stage 3: Support During Data Collection and Analysis
Stage 4: Initial Evaluation Results Briefing to Final Report
Stage 5: Dissemination of Final Report; Post-Evaluation Review/Decisions; Action on Evaluation Findings

One of the new elements of the USAID evaluation process is an Evaluation Dissemination Plan. This product, introduced in USAID’s Evaluation Policy, is produced by USAID. An initial version is to be created before an Evaluation Statement of Work (SOW) is drafted. Producing a preliminary Evaluation Dissemination Plan early in the evaluation process helps ensure that an evaluation SOW reflects USAID’s needs for evaluation products in hard and soft copies, as well as oral briefings for various audiences, to support evaluation dissemination and utilization.

The expanded Evaluation Management Process table also includes several steps that are described as Quality Control Checkpoints (QCCs). Quality Control Checkpoints are opportunities during the evaluation process to review work completed to date, with an eye toward identifying evaluation quality issues early enough to correct them. As the graphic below suggests, at least one such opportunity exists in Stages 1, 2, and 4 of the evaluation process. The QCCs in this graphic, it should be noted, reflect “best practices” in evaluation, but are not checkpoints identified or referred to in USAID’s Evaluation Policy. Two of the checkpoints included in the graphic below (QCC#1 and QCC#4) are, however, already considered routine steps in USAID’s evaluation process: namely, a review of an evaluation SOW and a review of a draft evaluation report. To strengthen its quality control at these two steps, USAID has developed an Evaluation SOW Review Checklist keyed to ADS 203 and an Evaluation Report Review Checklist based on its Evaluation Policy and ADS 203, both of which can be filled out online in subsequent sections of this kit.

Two additional Quality Control Checkpoints included in the graphic are less consistently part of the USAID evaluation process, but they can easily be added by an Evaluation Manager. One of the best ways to do this is to incorporate these QCCs into an Evaluation SOW’s list of deliverables.

QCC#2 Desk Review/Inception Report and Detailed Evaluation Plan: USAID guidance stresses that evaluations should build on what is being learned from performance monitoring and that, before an evaluation team begins its field work, a well-developed evaluation design/methodology should be in place, including instruments and a data analysis plan. This QCC is intended to make sure that these steps are completed before the start of field work is authorized.

  1. Desk/Inception Report on What is Already Known: In the past, as indicated in USAID’s Evaluation Policy – Year One Report and other reviews of earlier USAID evaluations, teams often assembled and started their field work without first determining what portion of the answers to a set of evaluation questions already exist (or partially exist) in monitoring reports. A desk review or inception report that summarizes the answers to evaluation questions found in existing documents can help an evaluation team narrow the focus of its field work plan, focusing it squarely on evidence gaps that need to be filled to answer each evaluation question in its SOW.
  2. Detailed Evaluation Plan: In addition to, or as part of, an inception report, the evaluation team can be expected to provide USAID with a more detailed evaluation plan and set of instruments than was provided in its proposal. It is not unusual for the members of an evaluation team to have been assembled after a Task Order or contract for an evaluation has been awarded. Requiring that an evaluation team, once assembled, elaborate on the proposal-stage evaluation plan they may not have written helps ensure that, before they leave for the field, they have a sufficiently detailed data collection, sampling, and analysis plan to produce high-quality evidence on each evaluation question—and that they are committed to executing that plan.

A requirement for USAID approval of an evaluation team’s desk/inception report and detailed evaluation plan prior to the start of field work can be written into an Evaluation SOW as a milestone. While delay at the start of an evaluation is not ideal, it may sometimes be the best way to ensure that a final evaluation product meets USAID standards.

QCC#3 Initial Evaluation Results Briefing: This oral briefing, supplemented with a written list of key findings, conclusions, and likely recommendations for each evaluation question, can be called for in an evaluation SOW at some point after field work has been completed and analysis is well underway, but before the team has begun to write up the Findings, Conclusions and Recommendations section of its report. The purpose of this briefing, and the reason it is a quality checkpoint, is to determine whether the team has gathered sufficient evidence to answer each question in its Evaluation SOW. If this briefing reveals that some questions were not addressed, or that more or different types of data are still needed to make other answers complete and credible, USAID has the opportunity to reallocate a portion of the team’s remaining LOE (level of effort) to filling the evidence gaps that have been identified. Without this step, USAID may discover evidence gaps only when it receives the team’s draft report, at which point the remaining LOE may not be adequate to address them.