Evaluation Quality Standards

Evaluations are expected to use methods that generate the highest quality and most credible evidence for the questions being asked, taking into account time, budget, and other practical constraints. USAID's Evaluation Policy and its ADS identify a number of quality characteristics on which USAID staff are encouraged to focus when planning and managing evaluations — particularly performance evaluations, where pre-policy reviews suggest there are important opportunities for improving evaluation quality.

Quality Characteristics of USAID Evaluations Highlighted in USAID’s Evaluation Policy and ADS 201

  • An Evaluation SOW that identifies a small number of evaluation questions to be answered, all of which must be addressed with empirical evidence in the evaluation.
  • A written evaluation design, including methods, main features of data collection instruments, and data analysis plans that are shared with country-level stakeholders as well as with the implementing partners for comment before being finalized.
  • A team with the appropriate methodological and subject-matter expertise to conduct an excellent evaluation, including, at a minimum, an evaluation specialist and an external team leader.
  • An adequate budget and timeline for a high-quality evaluation.
  • Data collection and analytic methods that ensure, to the maximum extent possible, that if a different, well-qualified evaluator were to undertake the same evaluation, he or she would arrive at the same or similar findings and conclusions.
  • Application of social science methods and tools that reduce the need for evaluator-specific judgments.
  • Evaluation findings that are based on facts, evidence, and data — including gender-disaggregated data where appropriate, as well as financial data that permits computation of unit costs and analysis of cost structure. This precludes relying exclusively on anecdotes, hearsay, and unverified opinions.
  • A limitations statement in the evaluation report that pays particular attention to the limitations associated with the evaluation methodology (selection bias, recall bias, unobservable differences between comparator groups, etc.).
  • Clearly identified report sections that document, for all evaluation questions, important findings (empirical facts collected by evaluators); conclusions (evaluators' interpretations and judgments based on the findings); and recommendations (proposed actions for management based on the conclusions) that are action-oriented, practical, and specific, with defined responsibility for the action.
  • Standardized recording and maintenance of records from the evaluation (e.g., focus group transcripts), and their inclusion, together with data collection instruments, analysis procedures, and original scope of work for the evaluation, in report annexes.