
Further Findings – #10.

Why we do not produce country average scores

Those who would like to see how different countries measure up in meeting their aid transparency commitments often ask why we do not produce average scores for country or donor groups in our ATI. Likewise, donor organisations which are keen for the ATI to take a ‘whole-of-government/institution’ approach would like their score presented as an average – so instead of measuring USAID, MCC and Treasury separately, we would simply present one ranking for the U.S.

Unfortunately, the reality of how aid is delivered (and reported) makes it practically unfeasible to produce country or donor rankings.

Here are the two main reasons why:

1. Differences in the number of agencies assessed within each country/donor group:

In 2013, multiple agencies were assessed for the European Commission, France, Germany, Japan, UK, United Nations, U.S. and the World Bank. Why do we do this for some countries but not others? This is because aid delivery is often highly fragmented (it is not uncommon for countries to have 10+ organisations delivering aid) and not all donors currently report aid information comprehensively to a single system.

There is no simple way of answering the questions of how many countries/organisations provide resources for development cooperation, and how many different agencies within these donor groups are responsible for aid delivery. Even if we were able to devise a clever way to identify every single agency that delivers aid, we would not be able to assess them all due to internal resource constraints. So we use a set of criteria to decide which agencies to include. Based on these criteria, the ATI assesses more than one agency for large donors (spending more than USD 10 bn per annum) with multiple ministries or organisations responsible for significant proportions of ODA. For others, we select the lead agency responsible for delivering or reporting aid.

This is not to say that none of these donors have fragmented aid delivery (in fact many do), just that their other agencies have limited influence. To create an “average score” or “aggregate ranking” that fairly reflects each country’s or donor’s level of aid transparency, we would need to weight each agency’s score by the proportion of the country’s or organisation’s total aid envelope that it implements, as sketched below. Unfortunately, this spending information is often unavailable.
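To make the weighting problem concrete, here is a minimal sketch of what such a calculation would require. The agency names, scores and spending shares below are entirely hypothetical, not ATI data; the point is that the spending shares are precisely the figures that are often unavailable.

```python
# Hypothetical illustration: how a spend-weighted country score would be built
# if each agency's share of the country's total aid envelope were known.
agencies = {
    # agency: (ATI score in %, share of country's total aid spending)
    "Agency A": (85.0, 0.60),
    "Agency B": (40.0, 0.30),
    "Agency C": (15.0, 0.10),
}

simple_average = sum(score for score, _ in agencies.values()) / len(agencies)
weighted_average = sum(score * share for score, share in agencies.values())

print(f"Simple average:  {simple_average:.1f}%")    # 46.7%
print(f"Spend-weighted:  {weighted_average:.1f}%")  # 64.5%
```

The two figures can differ substantially, so without reliable spending shares any single country number would be arbitrary.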

2. Variation in the performance of agencies:

Agencies belonging to a single donor country or group are not always alike.

  • Agencies often retain a large amount of autonomy in deciding how much information they make available and have different publication approaches, and should therefore be held individually accountable for those choices.
  • There is often wide variation in the amount and quality of information made available by different agencies in a single country or multilateral organisation. It is helpful and instructive both to understand and to compare their performance. It also helps identify opportunities for peer-to-peer learning.

It is unclear how we would aggregate agencies into a single country or organisation score in a way that reflects wide variations in performance. The ATI score would not be accurate or meaningful if high performing agencies within a country were pulled down by lower performing agencies; similarly, lower performing agencies should not have their poor performance masked in an average score.

For example:

  • If all U.S. agencies’ levels of transparency were averaged to provide a single score, it would be 42.1% in 2013, placing the U.S. at the bottom of the fair category despite the high score of 88.9% for MCC and fair performances from USAID and Treasury (OTA).
  • The UK would be similarly placed in the fair category with an average of 43.4% (the median score of 34.7% would have placed it in the poor category). Such an average score would not only be unfair to DFID (which performed very well, with a high score of 83.5%) but would also inflate the score for the MOD, which performed very poorly with a low score of 12% – as the sketch below illustrates.
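A short sketch of this masking effect: only the DFID (83.5%) and MOD (12%) figures come from the text above; the other agency scores are hypothetical stand-ins, since the full list of UK agencies assessed is not reproduced here.

```python
# Illustrative only: shows how both a mean and a median can misrepresent
# a spread of agency performances within one country.
from statistics import mean, median

uk_like_scores = {
    "DFID": 83.5,      # figure cited above
    "MOD": 12.0,       # figure cited above
    "Agency X": 45.0,  # hypothetical stand-in
    "Agency Y": 30.0,  # hypothetical stand-in
}

print(f"Mean:   {mean(uk_like_scores.values()):.1f}%")    # 42.6% -- drags DFID down
print(f"Median: {median(uk_like_scores.values()):.1f}%")  # 37.5% -- hides both extremes
```

Whichever aggregate is chosen, the strong performer is penalised and the weak performer is flattered.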

Clearly, the incentives set by producing country average scores could be highly skewed. Ranked separately, it is possible to see the variation in the different agencies’ performance, including the common indicators on which they collectively perform well or poorly. It also, we hope, helps show where individual agencies should focus their efforts to improve.
