How To Foster Local Ownership for Improved Development Results: Collaborate, Learn, Adapt

Jan 24, 2020 by Kristin Lindell
COMMUNITY CONTRIBUTION

Imagine you’re tasked with designing a program that improves public service delivery to local communities in Uganda. You know that you need to engage local stakeholders for this program to be effective—without government buy-in and engagement, for example, service delivery changes will not be sustained. How would you go about doing it? 

When faced with this challenge, one implementing partner designed a platform, UBridge, to facilitate increased dialogue between local communities and their local leaders. By reporting and responding to incidents of compromised service delivery, community members and the government were empowered to solve their own problems. As a result, the government constructed boreholes for better water access, improved roads and mended bridges for access to markets, and added more classrooms to decrease crowding at schools. 

This offers an example of how local ownership can facilitate and sustain development results. But actually achieving local ownership over development efforts remains challenging, and that challenge raises an important question for international development donors and practitioners: how can we use our roles to intentionally and systematically foster local engagement and ownership? A recent review of the evidence on collaborating, learning, and adapting (CLA) and local ownership offers some answers: 

  • Local engagement leads to local ownership and, ultimately, improved development outcomes. Evidence from an in-depth case study on the Ebola crisis in Liberia, an in-depth case study on Community-led Total Sanitation in Zambia, and several CLA case stories demonstrates that when local stakeholders are engaged in defining development challenges and solutions via program activities, the results are more relevant to local needs and opportunities and are more effective than traditional donor-led approaches. This greater relevance and effectiveness in turn increases local stakeholders’ commitment to and engagement in identifying sustainable solutions to community challenges.
  • Bottom-up approaches contribute to better development results. A recent study analyzing about 10,000 development projects found that aid agencies achieve better results when using bottom-up approaches that empower frontline workers and organizations to make decisions based on their local knowledge and relationships. 

We have also long wondered what enables local ownership. Our synthesis of the evidence revealed these enablers:

  • Donor flexibility: An analysis of seven case studies of development initiatives conducted by the Overseas Development Institute (ODI) found that features of the donor agency environment, such as flexibility and transparency, were significant in facilitating the success of politically smart, locally-led development initiatives.
  • Leadership support: In both of the cases explored in our in-depth case studies, active participation from a diverse range of leaders contributed to the overall success of the interventions.
  • Openness: None of the examples mentioned in the briefer would have been successful without donors and implementers actively convening local stakeholders and co-creating solutions with them.

Conversely, what are some of the barriers to local ownership we observed in the evidence?

  • Limited time for staff to pause and reflect on how to make improvements in the intervention.
  • Coordination challenges as the scope, scale, and speed of the Ebola crisis created a chaotic environment and made coordinating response efforts challenging.
  • Distrust and resistance to government and outsiders due to one country’s recent history of civil war, which left many communities distrustful of government authorities and suspicious of messages related to the program.
  • Sustainability challenges as one national government has been unable or unwilling to continue devoting resources to a Community-led Total Sanitation program after donor funding ended.

You might be wondering, “What CLA approaches can I use to increase local ownership and improve development results?” Try some of these: 

  • Identify which actors, including from among local stakeholders and the private sector, are most critical to achieving shared objectives. 
  • Facilitate conversations among those critical stakeholders to identify shared interests, co-create programming, and develop stronger relationships.
  • Generate and use evidence from, and in collaboration with, stakeholders that would be most useful to decision-making while also working with partners to strengthen their capacity to generate, share, and use evidence.

And lastly, don’t forget to tell us about your experience using CLA approaches to foster local ownership by leaving your comments below.


What is the Work?

Jan 21, 2020 by David Jacobstein
COMMUNITY CONTRIBUTION

In promoting international development, what is the actual work we do?  

How is that work understood?  

Do our words truly reflect the work we want to see? 

Could a shift in this language help us to achieve meaningful changes in our work?   

Recently, at the Global Partnership for Social Accountability’s annual forum, I was part of a unique discussion around a gap practitioners see between the work they do and the way their work is understood. Specifically, the organizations advancing this work in developing countries tended to see their role as fostering trust and social capital, and using this capital to build collaborative approaches to improving the public delivery of goods and services. To be sure, they often pressed for greater follow-through on public commitments, but this idea of “helping government deliver by building connections with constituents” is very different from the logic embedded in the original idea of social accountability, which by its nature is seen as how we donors can help people hold government to account. They also viewed their work in communities as part of a longer-term effort connected with changes at the national level, rather than as purely community engagement. A more accurate label might be “collaborative engagement for delivery” (it resonates with this presentation by Rick O’Sullivan). 

However, this is not the rationale against which many donors have funded social accountability through the years. Donors in large measure have funded social accountability organizations to “hold government to account” for service delivery while also working on service availability and quality. They see these organizations as ensuring that service providers behave in the ways that donor plans and investments expect, and punishing them if they don’t. That gap is now creating a challenge for social accountability organizations, as some robust empirical research finds that social accountability programming can make a critical difference, while other studies find null results. As discussed elsewhere, what research finds seems to depend on whether it builds from the strategies actually taken by change agents or assumes the “catch and sanction malfeasance” logic that often sits behind donor funding.

This led to a soul-searching discussion over how much to confront donors with the reality of what the work actually is, versus continuing to accept the terms on which organizations receive funding. This includes whether donors are ready to move on to investing in research that builds theory around the strategies of local change agents, rather than testing theory derived in journals in the global north.

This led me to think about how important the language we use to describe the work of development can be to how we design and manage development programming. In particular, for complex social change efforts, it seems to me that better ways of describing work can link with better efforts at measurement and improved performance. 

Why is this? Well, it turns out that the language we use to describe work is internalized into the logic and measurement of a program, even if we all understand that other considerations exist (an old idea, it turns out). 

For example, for many years, donors working to address recurrent crises acknowledged the inevitable return of certain disasters and the need to strengthen the ability of communities to cope outside of the confines of a disaster-response paradigm. However, I would argue that it is really only after the donor community started to define “resilience” as an outcome of interest that we could change our collective behavior. Suddenly, we weren’t just acknowledging the need to respond to repeated shocks, but could put programming against an idea (and slowly figure out how to measure the right results) to achieve that outcome. The shift from a consideration in disaster programming to an area in its own right came first in language, and it spread as the new term allowed the concept to anchor programming, including reporting and monitoring. A similar trend happened in the evolution from agricultural extension to value chains to market systems, where the new frameworks or ideas empowered different types of programming to be undertaken and measured. And a similar challenge confronts social accountability now.

I was re-reading Dan Honig’s seminal book Navigation by Judgment on a long flight recently, and it flags a second way in which our language defining the work we are doing matters. For those not familiar with it, Dr. Honig’s research shows that for certain types of development challenges (generally, those that involve interaction with complex social change - so most of them), it is more effective to navigate by judgment rather than by top-down accountability to predetermined metrics. In other words, delegate decisions to frontline agents close to the action who have the best information and tacit knowledge to make course corrections. And again, I think the language we use to define our work (for example, in accountability) and the intermediate outcomes we aim to reach can play a huge role in structuring whether judgment is implicit to achieving those results. 

For example, if you are seeking to improve health service quality, the work requires someone close to the community in question to make adjustments. This is because an idea like health service quality depends on the perception of citizens, and cannot be assured by delivering objectively countable items such as particular drugs or kits. It focuses efforts at learning and adapting programming down at the local level, rather than learning above the project and simply directing it to do what is required. A similar evolution holds within the education sector, where an emphasis on the countable (teacher and student presence) has given way to an emphasis on outcomes requiring more judgment to reach (learning performance), as described by Lant Pritchett in Schooling Ain’t Learning.

If you accept my argument that the language we use to describe our work matters because it bakes in both our reliance on judgment and tacit knowledge, and our ability to program against “correct” intermediate results, what are the implications? There are at least three that I think are important for development work across a number of sectors:

  • In terms of theories of change and how we learn from evaluations and research, we need to shift the mindset from a theory-testing to a theory-building approach. This places a very high value on rigorous empirical work, including experiments, but only those oriented toward discovering mechanisms and variables that matter on the path from inputs to impact - in particular, finding those intermediate outcomes that can become the next “resilience” or “health quality” and what might go into them. It shifts the key question in an impact or performance evaluation from “did an intervention work?” to “why did the intervention sometimes work better?” Part of the process is also to get beyond generic appeals to adaptability or flexibility and toward articulating the right outcomes, so that our adaptive implementation is well grounded. These can be realist evaluations or part of RCTs; the distinguishing factor is the framing behind the learning rather than the technique. Some donors are taking up the torch with an emphasis on middle-range theory building. However, more can be done.
  • Our ability to define and assess meaningful intermediate outcomes is essential. In the education example above, the availability of good cross-country data on learning outcomes has made it possible to anchor work more directly to learning, rather than substituting education access as the outcome of interest with a variety of caveats. In my own democracy, rights, and governance sector, I see huge potential in ways to measure different forms of social capital or the strength of certain norms related to bedrock impacts like state legitimacy or political competition. Theory building (rather than testing) requires sharing and discussing anchoring intermediate outcomes and data sources that allow programming to improve, and a willingness to understand our work differently. Investment in the measurement of intermediate outcomes seems to be a valuable public good that donors can help to generate. This will then open space for new language and new approaches toward our intended impacts.

If there is one idea that I think offers the greatest opportunity in terms of improving our language, it is to better incorporate the idea and language of probability into our description of “what is the work.” 

For a lot of our programming, we are trying to position important reforms to have a greater chance of succeeding - not only on paper, but in reality. This can range from changes in public financial management rules, to improved protection of certain key rights, to ending fertilizer subsidies, to task shifting for health care. Yet all too often, our desire to present certainty pushes us to define these areas of work in simple steps, focusing on the visible progress of new laws or policies, or specific numbers of people trained or engaged, with theories of change to fit - leading to a focus on form, not function.

When we're trying to do big things, we have to accept that even the BEST intervention might not work, or at least achieve major impacts in a short time horizon. For example, failing to quickly, make a justice system more inclusive does NOT mean it wasn't money well invested, particularly if we may have positioned actors or seeded ideas in ways that make inclusive justice more likely over the next five years after our project ends. The idea that we should gauge progress similarly in building roads and building justice systems make no sense, but without a language to capture the difference, it remains impossible. 

I believe we would open huge space for our staff and partners if we started to change the language of these programs from a stepwise progression of linear change (pass law/pass implementing legislation/adopt policy/train workers/implement) to results of “make it 20 percent more likely that a budget will be shared publicly” or “have 25 percent more initial TB screening delivered by community health workers rather than hospitals.”  This change would allow us to focus on discovering how to make this happen, rather than always pursuing the easy first steps on paper. 

Perhaps a program to improve access to justice would start to operate in the realm of promoting certain norms, rather than direct training or outreach. Perhaps a program to address natural resource management would focus on intermediate markets and the incentives they create more than direct community engagement. 

Our programming space would be less limited by sub-sector and more defined by local knowledge of the place we’re working in. It would also transform our accountability for our portfolios from numbers of countable outputs, to defensible claims of progress. This would include more emphasis on our attention to the context and the integrity of our engagement and learning, consistent with our enterprise risk statement and with systems thinking. Assessing the probability of transformational change, rather than tracking specific steps of incremental change, offers a different language more appropriate to the realities of ambitious programming.

Next time you sit down to write a project or activity description, take a moment to think about how you are describing the work to be done. Keep your topline objectives the same, but see if you can change the language you use to describe the work to get there - build in a different intermediate outcome, or use a change in probability of the larger impact happening. The more projects and scopes designers can be both honest and clear about “what is the work,” the better we collectively can do that work.


Reflecting on Five Years of the CLA Case Competition

Jan 9, 2020 by Monica Matts

This fall, USAID’s Bureau for Policy, Planning and Learning and the LEARN contract announced the winners of our annual CLA case competition. This was the fifth year of collecting and judging case stories, and, over the years, we have learned a lot about Collaborating, Learning and Adapting (CLA) in practice. We have also learned about running an activity like this one and have found that it has had many more benefits than we had initially imagined. In this blog post, we’ll share some of the trends we’ve seen, the lessons we’ve learned, and challenges we’ve experienced in managing the CLA case competition.

Trends in submissions. Over the past five years, we have received 449 total submissions. That’s pretty amazing, considering that in our first year, our goal was 25 submissions. As the graph below shows, the number of submissions trended upward for three years, until we reached a peak of 127 submissions in 2018. Last year, the number of submissions dropped to 97.

[Graph: annual CLA case competition submissions, 2015-2019]

We have been pleased to see a good deal of diversity among the organizations submitting cases. Over the years, about one-third of the submissions have come from USAID Missions or Washington Bureaus, with an increasing number of those being joint submissions, where a USAID Mission and implementing partner prepare a case together. While most of the remaining two-thirds are submitted by USAID implementing partners, we also have a few submissions for work funded by other donors, like this one.

There is also diversity in the topics covered by the case submissions. While cases covering activities and organizations in Africa are the most frequent (over half of each year’s cases), the case competition database includes cases from all regions. The graphic below shows a cumulative distribution of cases across the map.

[Map: cumulative distribution of case submissions across regions]

Additionally, the cases come from a variety of technical sectors. Agriculture and health are the most frequently represented, but there are a number of case submissions from all major sectors, including education, water and sanitation, democracy and governance, and economic growth, among others. Each year, we also see a number of cases that focus on operational issues, like this winner from the 2019 competition; that’s great because those cases often have a lot to say about the Enabling Conditions components of the CLA framework.

What we’ve learned about CLA. We started the competition in 2015 because we surmised that many USAID staff and partners were already practicing CLA, but we didn’t have a channel for hearing about it, or they weren’t calling their practices “CLA.” Submissions to the case competition confirmed that CLA was being practiced in a wide variety of contexts and programs, as we had imagined. The case competition also seemed to generate more interest in and a better understanding of CLA as the competition went on, as evidenced by the growing number of submissions and the increasing level of sophistication of the practices they described.

Several components of the CLA framework are more prevalent than others. In the cases submitted to the case competition, Collaboration and Adaptive Management practices were, by far, the most frequently described. Enabling Conditions components were included less frequently in the case submissions. Cases cited the Mission Resources, Scenario Planning, and Institutional Memory subcomponents the least. Does this mean that practices and approaches to these components are, in fact, practiced less frequently? Is less attention paid to them? Or, does it mean that enabling conditions are more difficult to write about? We haven’t explored the answers to these questions, but would welcome your thoughts or comments below.

USAID and LEARN have explored a number of other questions through two extensive analyses of the cases submitted through the competition. These analyses (find links to the full papers and briefers on this page) looked for patterns around how CLA practices and approaches contributed to organizational change and development outcomes. The studies’ findings included:

  • Local engagement leads to improved outcomes;
  • Feedback loops increase likelihood of evidence-based decision-making; and
  • Pause and reflect leads to better outcomes.

For more information about these and the other findings and implications, see the analyses from 2015 and 2018.

What we’ve learned about managing a competition. In the spirit of CLA, we continued to iterate on the process of launching and managing a competition over the years, in order to improve the utility of the cases and the integrity of the judging process. For example, we updated the entry form several times to clarify for submitters what judges were looking for and to emphasize the CLA aspects of the case (as opposed to the technical story). Over the years, we also sought to better understand the implementation of Agency priorities through the case submissions. One year, for example, we requested examples of evaluation utilization, and last year, we added a question on how CLA supported self-reliance. We also adapted the judging process, going from judging based on a set of parameters to assigning points to each response on the submission form.

There were a number of benefits to hosting a case competition, some of which we anticipated when we launched it five years ago, and some of which came as a surprise. As mentioned earlier, we had hoped to learn about the types of CLA practices and approaches that USAID staff and partners were using in their work and the variations in practice based on location, technical sector, and organization--and we did! We also hoped that the attention to a competition would help to promote increased understanding of CLA. Attention around the competition has indeed grown: this past year, USAID Administrator Mark Green himself announced the winners to the Agency.

We have continued to realize the value of having an accumulation of case stories; the case competition database has become one of our most valuable resources, which we hadn’t anticipated. In addition to providing information to support the evidence base for CLA, USAID has drawn on cases from the database in reporting to Congress and other external bodies and in providing stories for events and written products. The value in having ready access to hundreds of examples of how development organizations are collaborating, learning and adapting can’t be overstated--we reference the database and stories all the time!

Finally, we have learned that managing a case competition takes time and resources. Resources, mainly in the form of staff time, are needed to promote the competition, do an initial review of submissions, judge the cases, reach out to winners and edit and post the cases to USAID Learning Lab. Running an effective competition meant ensuring that we had devoted enough staff time so that it could be done well.

Opinion: 3 Ways You Can Get the Candidates You Really Need

Jan 5, 2020 by Monalisa Salib
COMMUNITY CONTRIBUTION

This opinion piece was originally published on Devex.com on January 2, 2020. It was cross-posted with permission. 

With a greater focus on interdisciplinary programming and constantly shifting contexts in international development, we need to hire staff who can make the most of complex situations to achieve meaningful results. Put another way, we need to prioritize hiring employees with an adaptive skill set, regardless of technical sector or geographic expertise. When we don’t, our implementation suffers and our ability to achieve meaningful results is compromised.

To put it simply, adaptive employees are “individuals, regardless of title, that in collaboration with relevant stakeholders, systematically acquire and use knowledge to make decisions and adjustments in their work in order to achieve greater impact.” Perhaps the most important piece of this definition is the focus on impact: adaptive employees stay focused on achieving meaningful results and this “North Star” guides their decisions. A program manager in this situation won’t just “check the box” on a quarterly review of a program — they will stay focused on achieving programmatic outcomes and use the review to figure out what is working, what isn’t, and how to improve results.

Driven by a sense of curiosity and commitment, these individuals aren’t satisfied with the status quo. Nor do they assume they have all the answers or that their experience is all that matters. As they navigate inevitable changes, they remain humble, aware of all that they don’t yet know, and they value the relationships that are critical to achieving success.

You can read the rest of this piece on Devex.com.

Qualitative Comparative Analysis in Case Management Systems: Worker Support Is a Predictor of Better Outcomes

Jan 2, 2020 by Zulfiya Charyeva
COMMUNITY CONTRIBUTION

For the past eight years, I have focused on research and evaluation of programs that provide services to orphans and vulnerable children (OVC) and their families. These programs provide services to improve the overall health and well-being of their beneficiaries. Almost by definition, these kinds of programs are multifaceted, and their causal pathways are hard to determine.

As with many other programs worldwide, OVC programs rely on the local workforce, such as community health workers, case workers, and health volunteers. What I found perplexing is why certain case workers achieve better results than others. Obviously, personal characteristics can partially explain differences. However, I am more interested in understanding differences at the level of organizations: specifically, what can organizations do to better equip their workers to improve beneficiary outcomes? I chose to answer this question by mapping causal pathways for a complex intervention—a solution that might work for other programs, too.

My project, MEASURE Evaluation, funded by the United States Agency for International Development (USAID) and the United States President’s Emergency Plan for AIDS Relief (PEPFAR), conducted a study of COVida—a USAID-funded, national OVC program in Mozambique. COVida supports roughly 300,000 OVC and their caregivers each year to access high-quality services. We knew that some community-based organizations (CBOs) in the COVida program had better HIV outcomes than others. How, we wondered, could COVida best influence those factors so that other CBOs also would enjoy improved performance? We also wanted to produce evidence-informed, actionable recommendations for programs and donors in Mozambique on how to shift their strategies and, ultimately, their resource allocations, to optimally balance service quality and cost.

Our objective was to identify what combinations of modifiable case management attributes would lead to a defined outcome: improved knowledge of HIV status among beneficiaries. Incorporating collaborating, learning, and adapting (CLA) into the study, we worked with USAID and COVida to define the study research questions and methods. We chose to use qualitative comparative analysis (QCA) to unpack the complex effects of modifiable attributes of case management—such as caseload, training of case workers, and supervision structure. We thought QCA would be a good choice because it would provide a richer contextual picture of what led to changes in knowledge of HIV status. Linear and logistic regression models that we have used before are adequate for identifying one factor at a time, but less effective for our interest in how factors combine to create an effect—in other words, when and where different combinations of modifiable factors become important in reaching key outcomes. We chose six CBOs in three provinces and analyzed data from surveys with 70 case workers and their supervisors, plus COVida routine data.
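For readers who want a concrete feel for the method: the core of a crisp-set QCA is a truth table that groups cases by their configuration of binary-coded conditions and checks how consistently each configuration produces the outcome. Below is a minimal sketch of that step in Python with pandas; the condition names and codings are invented for illustration and are not the actual COVida study variables.

import pandas as pd

# Each row is one CBO; attributes are binary-coded ("calibrated"), and the
# outcome flags a high change in beneficiaries' knowledge of HIV status.
cases = pd.DataFrame({
    "experienced_workers": [1, 1, 0, 0, 1, 0],
    "formal_assignment":   [1, 1, 1, 0, 1, 1],
    "quality_meetings":    [1, 0, 1, 0, 1, 1],
    "outcome":             [1, 1, 1, 0, 1, 1],
}, index=["CBO_A", "CBO_B", "CBO_C", "CBO_D", "CBO_E", "CBO_F"])

conditions = ["experienced_workers", "formal_assignment", "quality_meetings"]

# Group cases by their configuration of conditions; a configuration is
# fully "consistent" when every case sharing it shows the outcome.
truth_table = (
    cases.groupby(conditions)["outcome"]
         .agg(n_cases="count", consistency="mean")
         .reset_index()
)
print(truth_table)

Configurations with a consistency of 1.0 are candidate “pathways”; a full QCA would then logically minimize them (for example, with the Quine-McCluskey algorithm, as dedicated QCA software does) into the simplified solution terms reported in studies like ours.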

Here’s what we learned:

1. Numerous factors improve knowledge of HIV status. No single attribute can explain changes in our outcome of interest. Instead, it is a combination of factors.

2. A few factors are chief contributors to the outcome:

  • Case worker experience or training
  • Case worker support through high-quality team meetings, one-on-one supervision, and a low number of case workers per supervisor
  • An appropriate number of cases per case worker and fewer complex cases (i.e., HIV-positive beneficiaries)
  • Transportation and phone access for case workers

We found six “pathways” that led to a change in the outcome of interest—each pathway with a different combination of case management attributes that contributed to the change. In identifying these pathways, the study showed ways to employ CLA to change processes to improve case management. All paths shared one condition: how cases are assigned. A formal process that considered complexity, caseload, experience, skills, and proximity in assigning cases was at least a partial factor in every instance of a high percentage change in HIV status knowledge.

3. Alternative paths can achieve the outcome. Figure 1, below, illustrates the six pathways we identified. Each pathway is independent of the others and each one leads to an increase in HIV status knowledge—but all of them are examples of employing CLA to learn from experience. Depending on the available resources, a CBO may focus on any one of the six.

For example, if a CBO cannot recruit experienced case workers—an attribute in four of the six pathways—it can pursue one of the other two pathways. One path could be to address case worker retention and improve the quality of team meetings, for example. An alternate path could be to assign fewer complex cases to each case worker, reimburse out-of-pocket expenses, and allow more time per case.

Figure 1. Pathways to improve HIV status knowledge


In summary, in order to improve knowledge of HIV status, we recommend that programs embrace CLA and try these tactics for improved performance:

  • Implement a formal process to assign cases, considering case complexity and existing caseload to reduce overwork.
  • Provide case workers with at least two types of support, such as weekly care team meetings, weekly supervisor meetings to address challenges and develop case management plans, or low supervision ratios so that managers are more available.
  • Hire experienced case workers.
  • Provide all case workers with follow-up training so that they have the tools to address challenging cases.
  • Expand the financial resources offered to case workers, such as increasing stipends, implementing bonuses, and reimbursing them for work-related expenses.

As part of the CLA process, COVida adapted its programming by providing new instructions to CBOs and much closer supervision of the process of assigning cases. The updated process considers caseload, work experience, skills, case complexity, and worker proximity to a case. The process is meant to prepare case workers for effective case management and to ensure they are not overburdened and that they have enough time to address the needs of each beneficiary.

Prior to the QCA study, COVida had already introduced other changes: monitoring tools managed by case workers themselves, upgraded qualifications for chief case workers in management roles, and a bonus system to incentivize performance, among others.

Our findings from the QCA study strengthened the project’s arguments for having introduced these changes and provided evidence that making these changes was the right path.

For more information, see the full study report and brief and access a recorded webinar.

Zulfiya Charyeva, PhD, is a senior technical advisor at MEASURE Evaluation, Palladium.


What is the Research-Implementation Gap and Why is it Relevant to USAID Programming?

Dec 31, 2019 by Natalie Dubois, Andres Gomez, Sara Carlson, Diane Russell

A gap in the forest. Sugodi, Barangay Cabayugan, Palawan, Philippines. Photograph by Jason Houston for USAID


Natalie Dubois, Andres Gomez, and Sara Carlson currently support Measuring Impact II, which offers assistance for best practices in adaptive management and the use of evidence and learning across USAID’s biodiversity portfolio. 


The research-implementation gap, also referred to as the knowing-doing gap and the knowledge-action gap, captures the idea that there is a disconnect between the knowledge generated by researchers and the information being used to inform policy and practice decisions. This gap impedes effective programming when program planning and implementation proceed with incomplete information and when managers miss opportunities to incorporate relevant knowledge into program decisions. This hot topic in applied research literature has real-world implications for USAID programs.


Researchers have been discussing what they can do—and are doing—differently to make their work more relevant and accessible to practitioners. As practitioners, we have been focused on the implementation side of that gap and how our approach to programming can help or hinder evidence-based decision-making. In a new paper in the journal Conservation Science and Practice, we expand the dialogue around the research-implementation gap to make explicit that bridging the gap is a shared responsibility between practitioners and researchers. Although our recommendations are directed toward conservation practitioners, they are applicable to the work of practitioners across all sectors at USAID. Practitioners across the Agency can apply the learning processes they already use to narrow the research-implementation gap.


At USAID

The Program Cycle offers approaches such as collaborating, learning, and adapting (CLA) and multiple entry points, including evaluation, to work on the implementation side of the research-implementation gap. These approaches and tools can be strengthened through evidence-based decision-making and adaptive management. Evidence-based decision-making focuses on acquiring evidence before a design or implementation decision to better understand what will likely work—or not work. Through adaptive management, practitioners learn from outcomes after these decisions have been implemented. Being explicit about how we apply these two forms of learning to decision-making in the Program Cycle has important implications for the research-implementation gap.


Five recommendations

If the research-implementation gap is a shared responsibility, then how might practitioners at USAID help address it through their work? Below we reflect on and interpret the five recommendations from the paper.


  1. Share your questions. It may seem obvious, but evidence that does not exist or is not relevant to decision makers cannot be used to inform decisions—so knowledge exchange between researchers and practitioners is particularly important. Researchers can do this by involving end-users in the process of science production, but most researchers will also welcome practitioner input about their evidence needs. Synthesizing and disseminating critically important research themes and questions can be an efficient way for practitioners to communicate their needs to the research community (e.g., USAID’s Biodiversity and Development Research Agenda) and open avenues to new partnerships with researchers. At the project and activity level, articulating well-defined questions for researchers when commissioning assessments and evaluations can increase their relevance to program decisions.

  2. Share your data. Monitoring and evaluation, adaptive management, and organizational learning all have the potential to generate information about performance and effectiveness that can feed into the research arena. However, sharing data also requires investment in infrastructure and systems to collect and catalog data and make it available in useful formats so it can be used in formal research projects. Practitioners’ compliance with USAID’s Open Data Policy ensures that researchers have access to program data that can then be analyzed to generate evidence on effectiveness.

  3. Help build the evidence base. Project implementation offers opportunities for learning that can improve global practice and scientific knowledge. However, making data accessible to researchers does little to build the evidence base if the data being generated are of insufficient quality to make reliable inferences about effectiveness. Practitioners can help by generating data that produce transferable knowledge that extends beyond simply assessing the success of a project in meeting its goals. For example, teams can make use of program learning agendas to test theories of change about how strategic approaches work.

  4. Apply multiple learning strategies. Evidence-based practice and adaptive management are not alternative frameworks. Systematic use of scoping and assessments in design can reveal where it may be more efficient to invest resources in learning from the evidence base versus taking action first and learning from project outcomes. Learning from implementation and sharing that learning widely can help inform future similar programming decisions. Even within a single project or activity, different approaches can be more or less suited to different information needs.

  5. Be aware of how your choices can perpetuate the gap. When program managers and design teams are faced with time and resource constraints, these pressures can push them toward learning from outcomes as the default, rather than building on existing evidence. However, when there is an existing research-implementation gap, waiting to learn from outcomes can result in practitioners wasting resources learning from mistakes that could have been avoided. Simply being transparent about how evidence and learning are being used to address uncertainty can help practitioners increase the efficiency of investments in evidence and learning.

Viewing the research-implementation gap as the shared responsibility of both researchers and practitioners will expedite knowledge exchange at the research-implementation interface. Practitioners at USAID already have several tools at their disposal that can help them do so. And by paying greater attention to how these tools are used and applied in program decisions, practitioners can play a critical role in closing the gap between research and implementation. 


Reference:


Dubois, N.S., A. Gomez, S. Carlson, and D. Russell (2019). Bridging the research-implementation gap requires engagement from practitioners. Conservation Science and Practice e134. https://doi.org/10.1111/csp2.134


How Can CLA Help on the Journey to Self-Reliance? An Interview with the HRH2030 Program

Nov 20, 2019 by Maria Castro
COMMUNITY CONTRIBUTION

We asked three questions of Juan Sebastián Barco and Katy Gorentz from the USAID Human Resources for Health in 2030 (HRH2030) program after their case study was named a winner in the 2019 USAID CLA Case Competition. During our conversation, they took a deeper dive into how their collaborative, evidence-driven work with the Colombian Family Welfare Institute (ICBF) is contributing to Colombia’s journey to self-reliance and how their approach is already being replicated in other countries where HRH2030 works.

Here is what they had to say:

Can you give an example of instances when HRH2030 used CLA to help ICBF collaborate with other actors to improve the practices and processes used for referral and follow-up in cases of child abuse? 

Juan Sebastián: HRH2030 is supporting the government’s cross-institutional social and health framework Ni Uno Mas, or Not One More, and is also collaborating with ICBF, the National Learning Service (SENA), and the Ministry of Health (when relevant) to develop a training platform and curricula for social and health workers to ensure adherence to childcare protocols and implement better case management practices with children and families. In addition, we are supporting institutional coordination efforts by developing process maps, clarifying referral processes, and establishing better communication processes with local communities that align with Ni Uno Mas to reduce the high child mortality rates associated with all types of violence. The goal of this framework is to improve collaboration among institutional stakeholders, improve social and health sector capacities with training in basic and technical skills, and generate a link with rural communities, including indigenous populations and communities formerly affected by prolonged conflict. Colombia’s first lady, ICBF, the Ministry of Health, the private sector, and local communities worked together to establish Ni Uno Mas.

Katy: Collaborating with ICBF to reflect on their practices and analyze the results allowed us to work with a variety of ICBF stakeholders, from national-level officials to social workers in municipality-level protection teams, to prioritize the most tangible steps for improving the quality of the services offered to children and families. To achieve this, we used the case management assessment tool to help the protection teams visualize what optimized case management would look like if it were running at the highest quality, reflect on the status of their own case management practices, and triage and prioritize how case management can be improved to reflect the lessons learned from the assessment. From there, we’ve supported ICBF’s work with SENA to see that these ideas are reflected in social work trainings. 

How is an evidence-based approach helping ICBF, and how does it take into account sustainability for when ICBF is no longer working with HRH2030? How is using this approach contributing to their journey to self-reliance? 

Juan Sebastián Barco: ICBF needs better tools and evidence to make decisions, and the government of Colombia wants evidence-based actions, big data, and risk prediction. Our models are an important step in this direction. We have contributed to ICBF’s push to align with the government’s priorities by incorporating assessments that shed light on the operational and technical obstacles. The assessments also increase ICBF’s agency and stake in the process as well as its leadership. 

Katy: The HRH2030 team heavily involved ICBF from the beginning. ICBF saw the value of the information we gathered through the assessments, and they valued transparency throughout the process. In practice, collaborating helped them get valuable information that they could use right away, as they were involved in the process and thus able to analyze actual results throughout. ICBF was able to latch onto HRH2030’s approach because they saw real value every step of the way, seeing the information they needed, the work that went into it, and what data they could get out of it. As a result, they were able to see that collaborating at multiple levels could get them valuable information to inform decision-making and strategic planning. ICBF was also involved in the design, which helped them understand that this was something tangible that they could use in the future – they’re already planning to expand use of these approaches to other regions of Colombia. 

The Colombia activity is one country activity of the global HRH2030 program. Are any of the tools you used in Colombia being used in other countries with other organizations interested in measuring their development? 

Juan Sebastián: Katy can answer this question. 

Katy: HRH2030 Capacity Building for Malaria (CBM) is working on organizational maturity, organizational processes, planning, and strategic thinking with National Malaria Control Programs (NMCPs) in highly endemic malaria countries. CBM uses a maturity model assessment to understand and improve organizational process performance, which really resonated with our work with ICBF. So, we adapted the maturity model approach to ICBF’s needs based on our experience with CBM. In addition, the relational coordination assessment was something that came from the Colombia activity and went over to CBM. We used the relational coordination assessment in Colombia, which helps build a culture of effective internal collaboration, to understand communication strengths and breakdowns within ICBF at the national level. Country representatives and the CBM activity team in Chad expressed interest, which led them to incorporate this approach into their baseline assessment of NMCP capacity. CBM is now incorporating that tool for all CBM activities where they are conducting organizational capacity assessments. It is an especially useful tool because it helps gather evidence for planning on internal coordination and optimization, which is usually a hard concept to quantify.

Adapting to the changing landscape of social and health workforces in Colombia requires collaboration across sectors and rapidly assessing what works. Collaborating, learning and adapting has been pivotal to HRH2030’s partnership with ICBF and will continue to be a crucial framework as the influx of migrants from Venezuela tests the country’s capacity to provide protective services to children and adolescents on its journey to self-reliance.

Juan Sebastián Barco is the director for the HRH2030 Colombia activity, and Katy Gorentz is the monitoring and evaluation manager for HRH2030. HRH2030 is a global project funded by USAID and implemented by Chemonics and a consortium of partners; the Colombia activity is supported by the American International Health Alliance.

How CLA Helped Increase Self-Reliance in Zambia

Nov 16, 2019 by Laura Ahearn
COMMUNITY CONTRIBUTION


This blog post is a summary of a study entitled, “Deep Dive of Akros Community-Led Sanitation Program in Zambia.”

How can people be encouraged to take charge of their community’s sanitation needs and become self-reliant in latrine construction and use? Open defecation, which can lead to serious infectious diseases across the population and to stunting or even death in children, continues to be a problem in many countries. One approach to helping communities become Open Defecation Free (ODF) is Community-Led Total Sanitation (CLTS), which eschews the traditional subsidy approach in favor of building community self-reliance by motivating local residents to handle their own sanitation needs. In Zambia, the Ministry of Local Government and Housing (MLGH) embraced the CLTS approach in its Zambian Sanitation and Health Program (ZSHP) and set out to implement CLTS across the country’s rural districts with support from UNICEF and the UK’s Department for International Development (DFID). Along with Akros, a Lusaka-based implementer, ZSHP collaborated, learned, and adapted its approach to CLTS, ultimately making it more successful at ending open defecation and increasing self-reliance.

ZSHP’s CLTS approach was centered on training Community Champions to facilitate “triggerings,” which were two- to three-hour processes that included a “walk of shame” around the village to identify the locations where open defecation occurred. The village residents were encouraged to view open defecation not as an individual choice but instead as a problem that had serious health implications for all community members. Following the triggering, communities would usually decide to create a Sanitation Action Group, build their own latrines, set up hand-washing stations, and improve their overall waste management. No subsidies were provided for latrine construction under the CLTS approach, as it was designed to foster a sense of ownership and self-reliance among community members.

Akros came on as an implementer of ZSHP in 2014. To complement the CLTS approach, Akros developed a Mobile-to-Web (M2W) application that facilitated real-time monitoring of each community’s progress, speeding up the feedback loops between community members and government officials. After experiencing initial success, however, Akros realized that the improvement it had seen at first was not continuing. Akros decided to start collaborating with traditional leaders more intentionally and effectively, and once it did so, success rates increased remarkably. 
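To make that feedback loop concrete, here is a minimal sketch in Python of the kind of aggregation a monitoring application like M2W might perform. The field names, data model, and coverage threshold are assumptions for illustration, not Akros’s actual system.

from dataclasses import dataclass

@dataclass
class VillageReport:
    """One community's latest sanitation report, as a field worker might submit it.
    This schema is a hypothetical simplification."""
    village: str
    households: int
    households_with_latrine: int

def flag_lagging(reports, threshold=0.5):
    """Return villages whose latrine coverage falls below the threshold,
    so Community Champions and traditional leaders can target follow-up visits."""
    return [r.village for r in reports
            if r.households_with_latrine / r.households < threshold]

reports = [
    VillageReport("Village A", households=120, households_with_latrine=96),
    VillageReport("Village B", households=80, households_with_latrine=24),
]
print(flag_lagging(reports))  # ['Village B']

In the actual program, the same idea played out through dashboards: chiefs and government officials could see which villages were lagging and direct attention and resources accordingly.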

While Akros employees did not explicitly set out to incorporate collaborating, learning, and adapting (CLA) into their programming, the M2W component of the CLTS program in Zambia nevertheless integrated the principles of CLA. It leveraged collaboration among community members, government officials, and traditional leaders to yield information about the sanitation status of each community, which in turn enabled Community Champions and traditional leaders to provide additional resources and attention to areas advancing more slowly. The result was faster and more durable progress to ODF status in the treatment communities.

The case analysis deep dive on how Akros and ZSHP incorporated CLA into their work is part of an effort to examine the evidence base for CLA, conducted by USAID LEARN in support of USAID’s Bureau for Policy, Planning and Learning. It is the second of two CLA case analysis deep dives, the first one focusing on Global Communities’ response efforts to the Ebola outbreak in Liberia. Both of these studies adapted methods from contribution analysis, contribution tracing, and outcome harvesting to analyze corroborating evidence and alternative explanations of the outcomes.

Zambia’s ZSHP integrated CLA in order to achieve sustainable sanitation outcomes in rural districts across the country in the following ways:

COLLABORATING: In collaboration with UNICEF and the Government of Zambia under the CLTS program, Akros worked closely with community members, especially Community Champions and Sanitation Action Groups, encouraging them to take on the responsibility of making their community ODF without any outside funds for latrines or hand-washing stations. Akros improved the efficacy of this approach by working closely with traditional leaders, government officials, schools, researchers, the media, and other community groups. 

LEARNING: In order to facilitate continuous learning about progress on the ground, Akros developed an M2W application that allowed for real-time monitoring and quick feedback loops. Akros also learned from a number of sources, including two evaluations, studies conducted by researchers (both internal and external to Akros), and staff members’ day-to-day experience.

ADAPTING: Akros incorporated innovative real-time monitoring into the CLTS program using its M2W app when data indicated a need for quicker and more accurate feedback loops. Akros also designed the “Chief App” to facilitate better utilization of the data by traditional leaders, who accessed the information using tablets and a simplified dashboard designed especially for them. Using the M2W data to track the performance of their headmen/women’s villages, the chiefs/chieftainesses adjusted their visits to villages, thereby using their scarce fuel more efficiently. 

So, what can we conclude from this? The case analysis deep dive yields a number of insights into the specific contributions CLA made to the effort and the results. It suggests that strategic collaborations with government officials, traditional leaders, and community members led to greater feelings of local ownership, self-reliance, and in many cases, effective behavior change. Enabled by donor flexibility, and strengthened by a broad range of leadership support and participation, CLA approaches in this case incorporated innovative digital monitoring using the M2W app that led to better quality data and speedier feedback loops. Chiefs/chieftainesses and headmen/women were also involved in ways that supported development outcomes, thereby demonstrating how traditional leaders can be constructive agents of change rather than anachronistic obstacles to development.

Selected Citations:

Boston University’s Center for Global Health and Development and Zambia Center for Applied Health Research. (2017). Impact Evaluation of the Sanitation and Hygiene Program in Zambia: Final Report. https://www.unicef.org/zambia/ZSHP_Impact_Evaluation_Report_2017.pdf

Kar, K. and Chambers, R. (2008). Handbook on Community-Led Total Sanitation. http://www.communityledtotalsanitation.org/sites/communityledtotalsanitation.org/files/cltshandbook.pdf

Yeboah-Antwi, K., et al. (2019). Improving Sanitation and Hygiene through Community-Led Total Sanitation: The Zambian Experience. The American Journal of Tropical Medicine and Hygiene (100)4: 1005–1012.

Zambia Ministry of Local Government, Housing, Early Education, and Environmental Protection and UNICEF. (2011). Community Led Total Sanitation: An Evaluation of Experiences and Approaches to Date. https://www.unicef.org/evaldatabase/files/2011_Zambia_-_ZAM_WASH_CLTS_Evaluation_Report_2011.pdf

Where to Go for Research Evidence

Nov 13, 2019 by Lily Sweikert
COMMUNITY CONTRIBUTION

Are you looking for research evidence to inform your development programming? You’re in luck. The Development Experience Clearinghouse (DEC), a free, publicly available, and searchable database, recently added a list of more than 15,700 USAID-funded and USAID-affiliated peer-reviewed research publications to its collection.

Maybe you work in global health and are interested in reading about USAID-supported research on the transmission of HIV, malaria, or tuberculosis. Or, perhaps you’re addressing wildlife trafficking and you want to learn about methods to engage the local community. Simply click the link above and type in your search terms. You could also search by country or region to see the research evidence produced through USAID support.

USAID is committed to increasing public access to and usability of USAID-funded data and research evidence, in compliance with the Foundations for Evidence-Based Policymaking Act and the Public Access Plan.

Now, the general public can more easily find USAID-funded research evidence to draw on to inform future development policies and programming. The addition of this collection to the DEC is one more example of that commitment in practice.

We are thrilled to be able to provide improved access to these USAID-supported research publications. The successful publication of these journal articles is a testament to the variety and quality of USAID-supported research, including locally-generated research, supporting many countries’ journeys to self-reliance.

Embracing Uncertainty: The Potential for 'Mindful' Development

Oct 27, 2019 by Guy Sharrock, Catholic Relief Services
COMMUNITY CONTRIBUTION

There is a growing awareness that many aspects of economic and social development are complex, unpredictable, and ultimately uncontrollable. Governments, non-governmental organizations, and international agencies have realized the need for a change in emphasis; a paradigm shift is taking place away from predominantly linear and reductionist models of change to approaches that signal a recognition of the indeterminate, dynamic and interconnected nature of social behavior.

Over the last few years, many international NGOs have been adopting a more adaptive approach to project management, often with reference to USAID’s ‘Collaborating, Learning and Adapting’ (CLA) framework and model. In the case of Catholic Relief Services, this work builds on earlier and not unrelated capacity strengthening interventions – still ongoing – in which projects are encouraged to embed ‘evaluative thinking’ (ET) (Buckley et al., 2015) into their modus operandi.

Ellen Langer, in her excellent book The Power of Mindful Learning (Langer, 1997), introduces the notion of ‘mindfulness’. This concept, underpinned by many years of research, can be understood as being alert to novelty – intentionally “seeking surprise” (Guijt, 2008) – introducing in a helpful manner a sense of uncertainty to our thinking and thereby establishing a space for ‘psychologically safe’ learning (Edmondson, 2014) and an openness to multiple perspectives. This seems to me very applicable to the various strands of CLA and ET work in which I’ve been recently engaged; Langer’s arguments for mindful learning seem as applicable to international development as they are to her own sector of research interest, education. To borrow the language of Lederach (2007), Langer seems to “demystify” the notion of mindfulness whilst at the same time offering us the chance to “remystify” the practice of development work that seeks to change behavior and support shifts in social norms. This is both essential and overdue for development interventions occurring in complex settings.

A mindful approach to development would seek to encourage greater awareness in the present of how different people on the receiving end of aid adapt (or not) their behavior in response to project interventions; in short, a willingness to go beyond our initial assumptions through a mindful acceptance that data bring not certainty but ambiguity. According to Langer, “in a mindful state, we implicitly recognize that no one perspective optimally explains a situation…we do not seek to select the one response that corresponds to the situation, but we recognize that there is more than one perspective on the information given and we choose from among these” (op. cit.: 108). Mindful development encourages a learning climate in which uncertainty is embraced and stakeholders intentionally surface and value novelty, difference, context, and perspective to generate nuanced understandings of the outcome of project interventions. It treats uncertainty as the starting point for addressing complex challenges, paired with a willingness to “spend more time not knowing” (Margaret Wheatley, quoted in Kania and Kramer, 2013) before deciding on course corrections if needed. As Kania and Kramer (ibid.: 7) remark, “Collective impact success favors those who embrace the uncertainty of the journey, even as they remain clear-eyed about their destination.”

References

Buckley, J., Archibald, T., Hargraves, M. and W.M. Trochim. (2015). ‘Defining and Teaching Evaluative Thinking: Insights from Research on Critical Thinking’. American Journal of Evaluation, pp. 1-14.

Edmondson, A. (2014). Building a Psychologically Safe Workplace. Retrieved from: https://www.youtube.com/watch?v=LhoLuui9gX8

Guijt, I. (2008). Seeking Surprise: Rethinking Monitoring for Collective Learning in Rural Resource Management. Published PhD thesis, Wageningen University, Wageningen, The Netherlands.

Kania, J. and M. Kramer. (2013). ‘Embracing Emergence: How Collective Impact Addresses Complexity’. Stanford Social Innovation Review. Stanford University, CA.

Langer, Ellen J. (1997). The Power of Mindful Learning. Perseus Books, Cambridge, MA.

Lederach, J.P., Neufeldt, R. and H. Culbertson. (2007). Reflective Peacebuilding. A Planning, Monitoring and Learning Toolkit. Joan B. Kroc Institute for International Peace Studies, University of Notre Dame, South Bend, IN, and Catholic Relief Services, Baltimore, MD.

