Performance Evaluation Designs

Including in a Project M&E Plan an analysis of the need for evaluations during the project (tied to a threshold or key decision) and at the end of the project (either to inform decisions or to capture learning) lays the foundation for allocating sufficient evaluation resources and for planning in a way that allows the best methods to be used for a quality evaluation.

Performance evaluations are the type of evaluation in which USAID most frequently invests. While detailed planning for Performance Evaluations is not as critical at the M&E Plan stage as it is for an Impact Evaluation that will parallel implementation, it is useful for an M&E Plan to consider the types of questions, timing, evaluation teams, evaluation designs, and data collection and analysis methods that are likely to be required over the life of a project. This helps ensure that a project's M&E budget is adequate both for anticipated evaluations and for unplanned evaluations that may turn out to be required. It also serves as a useful cross-check on the adequacy of plans for monitoring a project's performance, since some data needed to support evaluations may not have been thought of when the monitoring component of a Project M&E Plan was drafted. Three sets of choices about Performance Evaluations are particularly helpful to consider, at least on a preliminary basis: evaluation questions and timing; evaluation staffing; and evaluation designs and methods.

Evaluation Questions and Timing

For projects developed in line with an existing CDCS, or while a CDCS is being prepared, the topic of evaluation questions will not be new. As described on the kit page on Anticipating Evaluation Needs, evaluation questions identified in a CDCS will often be answered by project-level evaluations. In a parallel fashion, USAID expects that Project Evaluation Questions will be included in a Project Appraisal Document's (PAD) narrative, or in a table similar to the kit's Project Evaluation Questions Worksheet, which Missions might include in either the PAD or the project's M&E Plan. An M&E Plan is also likely to include a narrative description of anticipated evaluations.

Evaluations carried out during implementation, and the questions they address, are sometimes categorized as formative, since the focus is on improving an ongoing intervention. While a performance evaluation carried out over a fixed period of time midway through a project is the most typical approach to formative evaluation, other possibilities exist. Developmental evaluation, action research, and operations research, all of which are carried out over several cycles during project implementation, also qualify as formative evaluations. Summative evaluations, in comparison, generally focus on the efficacy of a program or project that is approaching its end or has been completed. The focus and duration of an evaluation, as well as the number of questions to be addressed, all have implications for a project's M&E budget.

USAID Staff Encouraged to Serve on Evaluation Teams

An evaluation team may be predominantly composed of USAID staff. However, an outside expert with appropriate skills and experience will be recruited to lead the team, mitigating the potential for conflict of interest.

USAID Policy Fosters the Involvement of Partner Country Evaluators

To the extent possible, evaluation specialists with appropriate expertise from partner countries, but not involved in project implementation, will lead and/or be included in evaluation teams.

Evaluation Staffing

USAID's Evaluation Policy expects that 3% of the program budget managed by an operating unit will be set aside for external evaluations, including both Performance Evaluations and Impact Evaluations. The policy also requires that the evaluation Team Leader be an independent expert from outside the Agency who has no fiduciary relationship with the implementing partner or partners for the project or program to be evaluated. In addition, USAID expects, pursuant to ADS 203, that one member of every evaluation team will be an evaluation specialist.

Beyond this, USAID's Evaluation Policy encourages USAID staff, as well as evaluators from partner countries, to serve as members of evaluation teams. More generally, USAID guidance and experience indicate that, on occasion, USAID may elect to undertake an evaluation jointly with its country partner or other donors. Evaluations of this type require close coordination at a number of points, and may require that both USAID staff and the evaluation team dedicate more time than might be expected for other evaluations. Similarly, when USAID elects to undertake a Participatory Evaluation, in which beneficiaries play a more active role, additional evaluator and USAID staff time may be required to facilitate the process. Decisions about team composition for mid-term and final evaluations have M&E budget implications that are worth considering when the evaluation component of a Project M&E Plan is developed.

Evaluation Designs and Methods

In addition to identifying evaluations that may be needed over the life of a project, USAID's Project Design Guidance expects that a PAD, or the Project M&E Plan that supports it, will include suggestions for appropriate methods for any external evaluations these documents identify. This expectation parallels a similar requirement at the program level for the identification of evaluation designs and methods in a CDCS or accompanying PMP, as the sample table in the kit's page on Summarizing a PMP Evaluation Plan illustrates. Focusing on evaluation designs and methods as a Project M&E Plan is developed is useful not only for developing an adequate M&E budget, but also for defining the most appropriate and replicable methods for collecting baseline and performance data on measures for which endline information will be needed to support the learning and accountability purposes of a final project evaluation.

Unlike Impact Evaluations, where the range of experimental and quasi-experimental evaluation designs for examining and making inferences about causality is reasonably well defined, Performance Evaluations cannot always be described in terms of a specific design, or overarching evaluation architecture. This stems primarily from differences in the kinds of questions these two types of evaluations address. Impact Evaluations concentrate on questions of causality with respect to a particular intervention or set of interventions, all of which can be handled within a single design framework. Performance Evaluations, by contrast, often address a heterogeneous list of questions, each of which may require its own methodological approach. A second reason is that evaluation designs, even non-experimental evaluation designs, tend to be defined and categorized in terms of how well they address questions about causality. A table at the bottom of this kit page provides a quick Overview of Non-Experimental Evaluation Designs and Causal Inference.

Performance Evaluations, like Impact Evaluations, draw on a wide range of data collection and analysis methods. An extensive list of reference documents and links on this page provides useful information on many techniques that are useful for Performance Evaluations, including methods for examining cost-effectiveness when a Performance Evaluation raises this type of question. It may also be useful to review earlier kit pages on Data Sources and Collection Methods and Data Analysis when preparing a Project M&E Plan. For Performance Evaluations that are likely to involve a heterogeneous set of questions (performance, cost-effectiveness, sustainability) and thus a variety of methods, users may find the Getting to Answers Matrix included in this kit as an evaluation Statement of Work and detailed design aid to be useful for making early notes on the kinds of methods that might be appropriate for specific evaluations discussed in a Project M&E Plan.

Missions developing Project M&E Plans for trade project evaluations will find that only a few of the methods described in this kit's references, namely document reviews and interviews, were used in most earlier USAID trade Performance Evaluations. This finding was one of several in USAID's From Aid to Trade evaluation of its trade capacity building portfolio that suggested areas for improvement. Other findings that may still be relevant for those planning future trade project evaluations were that baseline data were often missing, making “before and after” comparisons impossible, and that few USAID trade project evaluations undertaken prior to the issuance of USAID's Evaluation Policy included copies of the interview or other data collection instruments used. Those wishing to examine these older USAID trade evaluations for data collection and analysis ideas will find a link to a listing of trade evaluations through 2010 on this page.

In addressing evaluation questions, all USAID Performance Evaluations are expected to examine gender-differential access to, and utilization of, benefits from USAID interventions, both through gender-sensitive questions and by gathering and examining data on a gender-disaggregated basis. An earlier kit page on Data Disaggregation provides access to multiple documents and links that may be useful in this regard. Since USAID often conducts evaluations in difficult environments, references on this page also lead to information on conflict-sensitive monitoring and evaluation approaches. Finally, recognizing that Performance Evaluations will from time to time be undertaken for older projects or programs for which no baseline data were collected, this page also provides a useful guide to reconstructing baseline data where none exist.

While USAID policies stress the importance of sustainability, techniques for evaluating the sustainability of project services and benefits are not as well developed as the tools for assessing effectiveness, causality, and efficiency in evaluations. To address questions about the sustainability of organizations in which USAID invests, USAID's NGO Sustainability Index provides useful tips for evaluators on factors that may be important in many types of organizations. In addition, a new Sustainability Evaluation Checklist and the Canadian Sustainability Report both provide ideas about techniques and indicators. Generally speaking, Performance Evaluations, unless they are carried out on an ex-post basis after USAID funding terminates, may be limited to assessing sustainability plans and actions, and prospects for sustainability.