The challenge of learning in aid: Is accountability the problem?
The problem of learning in aid evaluation has been debated for decades – yet it remains as challenging as ever. A new research report explores why learning and accountability are hard to reconcile in practice.
Aid evaluation is normally intended to serve two important purposes: accountability and learning. Yet these purposes diverge: accountability means ensuring that resources are well spent, while learning means acquiring new knowledge from experiences and evaluations. A study recently completed at UiO argues that, in practice, learning is very difficult to reconcile with accountability.
Kristian Bjørkdahl (SUM, UiO), Desmond McNeill (SUM, UiO) and Hilde Reinertsen (TIK, UiO) reviewed a substantial literature, mapped the history of evaluation in Norway and Sweden, interviewed key figures in Swedish and Norwegian aid evaluation, and examined a selection of evaluation reports to arrive at this conclusion.
– While learning may happen ‘on-site’ for those practically involved, it is much harder to synthesise and transfer – to achieve so-called ‘big learning’, says Hilde Reinertsen. – The further removed one is from the field or the practical evaluation process, the more difficult it becomes. There are exaggerated expectations of what evaluation can solve, Reinertsen continues.
Contradictions in aid evaluation
Norad, Sida and other donors alike state that the “twin purpose” of evaluation is accountability and learning. However, their official manuals and publications do not acknowledge how these purposes may be in tension, or perhaps even in contradiction.
In their analysis, the research team found that such tensions and contradictions constantly arose in practice. Often, emphasising one purpose would come at the expense of the other. This led the team to conclude that in practice, the dual purpose involves fundamental trade-offs.
Accountability at the expense of learning
Examples of such trade-offs come from the many choices evaluation managers have to make: between building internal trust (among staff) or external trust (in the public); between formal, independent evaluation processes and informal, embedded ones; and between using external consultants as constructive facilitators of learning processes or as critical external auditors.
Aid evaluation managers may therefore work hard to enable learning-oriented evaluation processes, yet in practice, the concern for accountability still tends to come at the expense of learning. This was evident in the team’s rhetorical analysis of evaluation reports.
“Many of the reports we read were relatively strong on description and analysis”, says Kristian Bjørkdahl. “This makes them fairly useful for accountability purposes. However, they were weak on lessons learned and recommendations, and the link between the analysis and the recommendations was often lacking. This makes learning far more difficult.”
Aid is on the defensive
To explain how learning may suffer even when evaluation staff actively seek to ensure it, the team points to the wider evaluation system. Here, resource constraints are one key challenge: those who commission, write, read and use the reports often have very little time. In addition, rapid staff turnover and the difficulty of synthesising evaluation findings make it hard to translate knowledge into action.
Finally, the broader political context of results-based aid management, which requires aid staff to document the results of how funds are spent, may run counter to open, learning-oriented processes.
“Aid is a risky business – so it is easy to criticise. And aid evaluation is always, in part, political. When aid is on the defensive, accountability may be emphasised, while learning suffers,” says Desmond McNeill.
Four challenging choices
They conclude their report with four challenging questions:
- Does the evaluation process really need an evaluation report – and if so, what kind?
- Does it benefit from an external evaluation team?
- Would it be better if the report did not include recommendations, but rather left these to be developed by those concerned – on the basis of the analysis in the report?
- And should the concern for accountability be further strengthened by donors, even when it comes at the expense of learning?
Finally – addressing their recommendations to all those involved in doing and discussing development aid and aid evaluation – they argue that we must adjust our expectations of both aid interventions and aid evaluations, and talk openly about the practical trade-offs between accountability and learning.
The report’s conclusions have been met with both applause and criticism in the aid evaluation community. “We realise these may be controversial findings, but we hope that our report may lead to a constructive debate in Norway – as well as in Sweden”, says the team.