A Statement of Work (SOW) is widely viewed as the single most critical document in the development of a good evaluation. The SOW is important because it serves as a basic road map of all the elements of a well-crafted evaluation, and it forms the substance of a contract with external evaluators.
Ideally, the elements of an Evaluation SOW, taken together, strike an appropriate balance among questions, time, and resources. A 2011 PPL/LER review of recent evaluations highlighted characteristics of evaluations that met USAID evaluation standards as well as those that did not. Evaluations that failed to meet those standards were often those that included too many questions, both overall and in relation to the resources provided to conduct them. While USAID does not prescribe a specific format for an Evaluation SOW, the description of SOW elements in ADS 201 comes very close to identifying a sample evaluation SOW outline, which is provided below.
As this outline suggests, USAID structures an Evaluation SOW around the set of questions the evaluation will be expected to answer, rather than around a set of objectives or topics. Questions included in an evaluation SOW may originate in a CDCS or Project Design, or they may emerge later, once a project is underway. Throughout an evaluation SOW, it is essential that gender be considered and integrated into evaluation planning. This is particularly true for evaluation questions, which should elicit information on gender-differential participation in and benefits from projects, and should call for answers reported separately for men and women as well as for all participants or beneficiaries.
An Evaluation SOW and the report generated in response to it form a question-and-answer exchange, and a well-crafted SOW sets up or reflects widely accepted "best practices" for such studies. SOWs are important for Performance Evaluations as well as for Impact Evaluations. There are some differences between SOWs prepared for these two types of evaluations, but both can be prepared using the outline above. The major differences between them generally concern the duration of the evaluation and the range of questions addressed, with Impact Evaluations tending to focus on a narrower list of questions, in which evidence of causal relationships and measurable effects is paramount.
This resource box includes a number of SOWs developed for previous trade-related evaluations. Several of them include more questions than the "small number" USAID guidelines recommend, but they are otherwise useful references.
To facilitate the preparation of Evaluation SOWs, an interactive Evaluation SOW Outline Template, which follows this outline and indicates what is to be covered under each heading, is provided in this section of the kit. This kit section also includes USAID's Evaluation SOW Review Checklist, which those involved in preparing evaluation SOWs are encouraged to consult as a "self-check" on the completeness of their SOWs. From time to time, USAID has reviewed evaluation SOWs using a version of the checklist provided in this kit, including one review that examined a sample of trade evaluation SOWs, which is provided in summary form on this kit page. The results of reviews of this sort consistently suggest that use of the checklist as a guide by staff who are developing SOWs would be likely to improve their average quality.
Experience suggests that the two aspects of an evaluation SOW that USAID staff find most challenging to develop are the section on evaluation methods, which identifies approaches that are appropriate for collecting and analyzing data on a question-by-question basis, and the section on the evaluation budget, which should, in principle, be constructed based on USAID's methodological choices. Further, despite direction in the ADS to include an evaluation budget in an SOW, USAID SOWs in RFPs and IQC task order requests have not, in the past, consistently followed this instruction.
To support an Evaluation Manager’s efforts to identify appropriate evaluation methods for each evaluation question, particularly for Performance Evaluations, a Getting to Answers Matrix can be a useful tool. In addition to using this tool to generate its own ideas about appropriate methods, USAID can ask evaluation teams to respond with their own ideas using the same format. In addition to this matrix for identifying evaluation methods, this kit page includes a reading that walks through an example of the process for developing an evaluation budget once an evaluation design and methods have been selected.
Two volumes featured on this kit page provide additional guidance on the development of an Evaluation SOW. USAID's How-To Note – Evaluation Statements of Work focuses on the processes involved, including how to engage country partners and other evaluation stakeholders in discussions about the questions it would be useful to have an evaluation address.
The second volume, Evaluation Statements of Work: Good Practice Examples, provides at least one good example from a real USAID evaluation SOW of how each element of the outline above can be developed.