Monitoring Evaluation Quality Over Time

USAID uses various methods to help ensure that USAID evaluation reports, and the evaluation SOWs that generate these reports, are of high quality, including peer reviews undertaken as these products are developed and PPL/LER-organized technical audits and random checks on policy compliance. Missions and other operating units can also monitor and work to improve evaluation quality using simple meta-evaluation tools.

An important goal of USAID’s evaluation policy is to raise the quality of evaluations on a Mission-wide basis, or for all trade or economic growth evaluations across a region or bureau. Monitoring the quality of clusters of evaluations, or of the SOWs used to produce them, is fairly simple, and the results, even over the course of a year, can provide USAID operating units with important feedback on their strengths and weaknesses.

Most systems for monitoring evaluation quality, or meta-evaluation as this is often called, involve checklists that capture some, if not all, of the important characteristics of the SOWs or evaluation reports an operating unit produces. Missions already have the basic tools to conduct a meta-evaluation, whether by looking backward over a stack of evaluations that has accumulated, or by monitoring evaluation quality on a running basis: scoring each evaluation as it appears, adding the scores to a simple Excel database, and watching trends emerge over time.

  • To monitor the quality of evaluation SOWs over time, Missions could use the USAID Evaluation SOW Review Checklist included in this section of the kit, which is provided to staff who take USAID’s evaluation courses; a simpler version created by selecting some of the items from that checklist; or a very basic checklist built from the list of SOW elements provided in ADS 203.3.1.5.
  • To monitor the quality of evaluation reports, Missions could use the USAID Evaluation Report Review Checklist included in this section (also provided in USAID’s evaluation training courses), or prepare a very basic checklist based on ADS 203.3.1.8.
  • Finally, if a USAID operating unit has a number of USAID Impact Evaluations involving control groups that it wants to check for quality against one another, it could use or simplify the Checklist for Reviewing a Randomized Controlled Trial, which was developed from a U.S. Office of Management and Budget paper on evaluation designs that produce strong evidence.

When a checklist approach to meta-evaluation is used, the results can be displayed visually in a way that quickly informs users about patterns across a set of evaluations: which checklist items are consistently handled well in Mission products, and which are not always in line with USAID expectations, as the highly simplified illustration of a checklist-based meta-evaluation below demonstrates. Meta-evaluation checklists of this sort have been used internally by PPL/LER (and its predecessors) off and on over the past several decades to understand evaluation quality. Missions and other operating units can also monitor the quality of their evaluation reports, and evaluation SOWs, in this way.

Mission Evaluation Report (ER) Checklist Scores for 2013 (nine reports, ER #1 through ER #9; individual report marks omitted)

Evaluation Checklist Meta-Evaluation Item                                   Sum of Correct Scores
2-3 page executive summary mirrors contents of report                       7
Evaluation methods and instruments are included                             6
Study limitations are explicitly stated                                     7
All evaluation questions are addressed                                      7
The findings section makes clear which methods data came from               4
Findings-Conclusions-Recommendations progression is easy to follow          6
Recommendations are specific as to action and who is to take the action     5
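The per-item tally shown above can be produced with very simple tools. As a rough sketch of the idea, the short Python script below scores a set of reports against a checklist and sums the correct scores per item; the item names and pass/fail marks are hypothetical examples, not actual Mission data, and in practice the same tally is easily kept in an Excel sheet as the text suggests.

```python
# Sketch of a checklist-based meta-evaluation tally.
# Checklist items and marks below are hypothetical illustrations.

CHECKLIST_ITEMS = [
    "Executive summary mirrors report",
    "Methods and instruments included",
    "Study limitations stated",
]

# One entry per evaluation report: item -> True if handled well.
reports = {
    "ER #1": {"Executive summary mirrors report": True,
              "Methods and instruments included": False,
              "Study limitations stated": True},
    "ER #2": {"Executive summary mirrors report": True,
              "Methods and instruments included": True,
              "Study limitations stated": True},
}

def item_totals(reports):
    """Sum correct scores per checklist item across all reports."""
    totals = {item: 0 for item in CHECKLIST_ITEMS}
    for scores in reports.values():
        for item in CHECKLIST_ITEMS:
            if scores.get(item):
                totals[item] += 1
    return totals

totals = item_totals(reports)
for item, count in totals.items():
    # Items with low totals flag recurring weaknesses across reports.
    print(f"{item}: {count}/{len(reports)}")
```

A low total for an item (relative to the number of reports scored) points to a recurring weakness the operating unit can address in future SOWs or report reviews.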