Reinventing the Rules

Discover the Latest Innovations and Lessons Learned in Rule of Law and Legal Empowerment Projects

Recommendations on Improving Evaluations in the Rule of Law Sector

Earlier this month, the Folke Bernadotte Academy in Sweden issued a report on “Evaluations and Learning in Rule of Law Assistance.” The research report provides a comprehensive overview of the challenges to seeking and incorporating lessons learned in the rule of law sector. By looking at donor agencies in the US and Europe, it discusses how ‘disseminating and incorporating lessons learned into program development remains one of the more pressing issues that needs to be addressed by the rule of law community.’ Catch excerpts and recommendations below!

Credit: CGAP

Several factors explain why constructive use of lessons learned is not keeping pace with the expansion of the rule of law field:

  • Differing understandings of what the rule of law entails;
  • Numerous rule of law actors operating in the field;
  • An uneven quality of the methodological approaches used to evaluate rule of law projects or programs;
  • Inadequately prepared and thought-out evaluations produced on tight timelines;
  • Limited joint evaluations between donors and national counterparts;
  • Difficulty in accessing evaluation reports with valuable lessons learned;
  • Evaluation reports used to justify or discontinue on-going programming, but rarely to gain knowledge of what works or what does not work in the field; and
  • Lessons learned from problematic projects that are often ignored or forgotten.

The Importance of Comparing Findings in the Evaluation of Projects

Although the evaluation of individual projects is important, the value of evaluation is substantially augmented when findings from one project are compared with another project. Comparison makes it easier to identify relationships between interventions and outcomes and what intervening variables may affect the outcome of the project. Because the understanding of the rule of law field is so fragmented, it is important to compare and contrast approaches and results. For a comparison to be meaningful, it needs to take into account the ‘thick description’ of local arrangements and culture and specific characteristics that cannot easily be reduced to scores or numbers. Evaluations that systematically compare findings from multiple projects in various countries or settings are beneficial, although they remain rather unusual.

Improving the linkage between program objectives and activities 

A World Bank evaluation of three judicial reform projects in South America found that all three projects showed weak linkages between the objectives and the proposed activities. The evaluation pointed out that the most apparent illustration of this flaw related to the construction and refurbishing of courthouses and other legal institutions, which received considerable funds in the three projects. However, the program documents lacked any kind of discussion of the impact renovated courthouses might have on the overarching program objectives: creating a more effective, accessible, and credible judicial system. Instead, the evaluations criticize the projects for treating the links between activities and objectives as almost intuitive.

Changing the Perception that Rule of Law Programs are Hard to Evaluate 

Projects with overly optimistic and vague programmatic objectives are difficult to evaluate, and such objectives contribute to the perception that rule of law and democracy programs cannot be meaningfully assessed. For example, an initiative supporting a constitutional review process whose stated goal is the passage of a new constitution is likely to fail, as the review process might still be ongoing when the project comes to an end and other confounding factors are likely to affect the outcome of the review process.

Attributing Programmatic Successes to Multiple Factors

Observable changes celebrated as programmatic successes are often difficult to attribute to a particular program. The importance of other factors that might have affected the overall goal is often diminished. For example, a review of the United States Agency for International Development’s (USAID) justice program in Latin America found a significant decline in certain human rights abuses. Although the change might have been initiated by USAID-funded projects, other factors (e.g., the end of civil war, external political pressure, and programs funded by other donors) could also have contributed to this change. In practice, rule of law evaluations often do not consider these types of alternative explanations when a desirable outcome is observed. A study reviewing a sample of 25 evaluations of USAID’s democracy and governance (DG) programming found that only two evaluations carefully considered whether alternative explanations could have contributed to the observed results.

Evaluations Conducted by Central Evaluation Units vs. Decentralized Units

Several forces might compromise an evaluation’s independence: external or internal pressure to not disclose certain findings; the withholding of program documents or sources from the evaluators; the evaluators’ self-censorship to not offend colleagues; and the evaluators’ concern that findings might negatively impact future job prospects. To ensure independence, several donor agencies have separated the central evaluation office from the rest of the organization. For example, the Independent Evaluation Group at the World Bank reports directly to the board of directors.

However, many donor agencies have decentralized project evaluations to field staff, although there is some concern that this “moves the responsibility for the quality of a large portion of evaluations in a donor’s portfolio to individuals who generally have limited evaluation training”. Although evaluations are often carried out by or in collaboration with consultants, program officers still need a good understanding of evaluation techniques to draft the statement of work. Evaluations carried out by central evaluation units appear on the whole to be of better quality than evaluations commissioned by regional or programmatic donor units.

Credit: Centre for Development Impact

A study reviewing a sample of 25 evaluations commissioned by USAID’s democracy and governance (DG) program found that the evaluations needed major improvement. The review found that most evaluations provided insufficient information about the sources used, making it difficult to assess the reliability of the information underpinning the evaluators’ findings. Furthermore, the review found that evaluation reports frequently failed to provide detailed information, beyond notions such as ‘strengthening civil society’, about what activities had actually occurred. When the reports were more detailed, they tended to focus on the immediate outputs of the undertaken activities (e.g., 200,000 how-to-vote brochures with illustrations were produced) rather than what outcomes these activities actually had (e.g., did the voter brochures affect voter participation?).

The Benefits and Challenges to Conducting Joint Evaluations

Over the last 15 years, the number of joint evaluations has significantly increased. The increased interest in coordinating joint evaluations has been propelled by the broader development agenda focusing on donor coordination and sector-wide approaches, aid effectiveness, and results. Joint evaluations are particularly useful when there is a high concentration of donor activities or when there is a need to evaluate effectiveness of assistance funded through basket or general budget support. Moreover, joint evaluations can also facilitate evaluations of more controversial development issues or mitigate evaluation fatigue in host countries.

Joint evaluations, such as the one for Southern Sudan, can thus present a holistic picture of which efforts worked well and which did not. For example, by reviewing all donor-funded programs, the evaluation found that community reconciliation and peacebuilding efforts were isolated events that lacked links to national initiatives and were characterized by poor monitoring and follow-up. Still, a review of close to 700 evaluations found that 75% of the evaluations were single donor reviews, 7% were joint evaluations with another donor, and 15% were joint evaluations with a partner country.

Actual cooperation among donors is still limited, and when it occurs, it takes place primarily among a smaller group of like-minded donors, such as the Scandinavian countries, the Netherlands, and DFID. Because joint evaluations require significant coordination between the parties involved, they tend to be more expensive and time-consuming than single donor evaluations. Moreover, the relevance of the topic under review might be time-sensitive for some donors, which makes the timing of the evaluation process more challenging. To delegate responsibility for the planning of an evaluation, partners in a joint evaluation have to trust each other. Finally, geographical distances, language barriers, and domestic public procurement requirements might further complicate the ability of donors to collaborate on joint evaluations.

The Different Approaches to Joint Evaluations 

There are different approaches to joint evaluations. For example, a joint evaluation assessing anti-corruption support conducted evaluations of individual donor projects and programs in five different countries in order to compare the support across donors and countries. A different strategy is to review already existing evaluations commissioned by multiple donors in a particular subject area. Synthesis reports distilling the key programmatic and methodological lessons learned from evaluation reports, across donors but within a particular subject area, are highly useful as few donor officials read other donor agencies’ evaluations. When officials from various donor agencies are engaged in the compilation of a synthesis evaluation, more donors might actually read the report.

Donor-Country Partner Evaluations: Working with Local Consultants vs. Country Representatives 

To strengthen partner countries’ involvement in the evaluation process, many donors have recently reviewed their evaluation policies to jointly assess aid effectiveness with partner countries. Although there are good examples of joint partner-donor evaluations, evaluations in which the partner country is truly involved remain unusual. In fact, a recent survey of DAC members’ central evaluation units found that 15% of their evaluations were joint evaluations with a partner country. However, most of the ‘partner participation’ did not involve representatives from the government, and it was rare that partner countries were engaged in the planning, development, or follow-up phases of the evaluation process. Instead, the most common way donors incorporated country perspectives into evaluation reports was to hire local consultants. Although local consultants add local knowledge to an evaluation, they are consultants and not representatives of the partner country. Furthermore, local consultants might be hesitant to pinpoint weaknesses in a program as they might be too closely linked to the program or wish to be hired for future assignments.

The Difficulty in Searching for & Accessing Evaluation Reports

Many donor agencies have made their evaluation reports available online, but these online databases are difficult and time-consuming to search and do not contain all evaluation reports. In fact, program officers within larger donor organizations even find it difficult to learn what similar programs or evaluations the agency might have conducted in different geographical or thematic areas. A second, related issue that makes it challenging to locate past program documents is that documents are misclassified or that the databases offer inadequate options for narrowing searches.

Studies have found that employees of development agencies rarely find the time to read evaluation reports commissioned by their own agency, let alone those commissioned by other agencies. Some consider formal evaluation studies to be ineffective, or too long and too technical to read. In general, little knowledge of lessons learned from past projects and programs is transmitted by reading evaluation reports. According to one study, DFID employees found that they often had to ‘reinvent the wheel’ because information was not adequately transmitted during and following staff rotations.

The Preferred Method for Stakeholders to Absorb Lessons Learned 

For staff and other stakeholders with limited time to read evaluation reports, dissemination seminars or workshops are the preferred way to get information about lessons learned. Face-to-face meetings and the opportunity to discuss the findings are important factors in promoting their use. Research has found that staff members are more likely to take in findings and recommendations conveyed in evaluation reports when they are involved in, or kept informed about, the development of the evaluation process (without interfering with the independence of the evaluation).

Credit: ILRI

Why Organizations Don’t Pursue Evaluations on Lessons Learned

Several studies have found that evaluations are most commonly used to legitimize on-going programs or to phase out support. A third avenue, which is less frequently mentioned, is to assess past lessons when new programs are designed. An illustrative case is the unofficial pressure within donor agencies to move and disburse money. A study conducted by Sida found that 40% of annual disbursements took place in the last two months of the fiscal year. Staff members rushed to disburse funds at the end of the year for fear that unspent funds would not be re-budgeted the following year. This pressure created a funding bias towards renewing on-going projects rather than reviewing the past performance of other projects and designing new projects or programs based on lessons learned.

To address concerns that evaluation reports are too long and time consuming to read, several development agencies have started to produce evaluation briefs and thematic synthesis reports. For example, the European Commission contracted a thematic evaluation of its support to justice and security sector reforms. To encourage greater internal and external use of evaluation reports, several agencies have started to produce and disseminate shorter evaluation summaries. For example, in 2011 the former Swedish Agency for Development Evaluation (SADEV) launched an evaluation brief series. One of the evaluation series’ briefs provides a two-page summary of the main findings of a larger evaluation of Sida’s support to justice in reconciliation processes.

This report focuses primarily on practices within certain bilateral or multilateral donor agencies. There is little documentation on how contractors, local and international NGOs, or partner countries engage with evaluations and acquire knowledge about lessons learned.

To read more of the recommendations from the research report, click here.
