
Further Findings – #01.

Overall scores

The 2013 ATI results demonstrate that there is a leading group of organisations publishing large amounts of useful information on their current aid activities. The top-ranking agency is U.S. MCC, scoring 88.9%, while China takes last place with only 2.2%. At the top end, MCC (88.9%), GAVI (87.3%), UK DFID (83.5%) and UNDP (83.4%) are all close to ten percentage points or more ahead of the next-highest donor. The average score across all organisations is a comparatively low 32.6%, with 40 organisations falling below that average. As in previous years, larger organisations generally perform better overall. Multilaterals as a group tend to score higher than bilaterals, although the performance of individual organisations within each group varies significantly.

Several organisations, including the AfDB, Canada, EC ECHO, EC Enlargement, EC FPI, GAVI, Germany, UNDP, UNICEF, U.S. MCC and U.S. Treasury, have made big improvements in 2013 by publishing more information in accessible and comparable formats such as IATI XML or CSV, leapfrogging others that have not significantly increased the amount of information they publish, or that publish in less useful formats such as websites or PDFs. The top 27 agencies all publish at least some information in IATI XML. Some IATI publishers nevertheless fall into the poor category because they are not publishing enough current or comprehensive information, whether in IATI XML or in other formats.

The table shows the average scores for each indicator category (commitment, publication at organisation level and publication at activity level) for organisations placed in the five different performance categories (very good to very poor). The biggest difference is at the activity level, with organisations in the ‘very good’ and ‘good’ categories publishing information on their current activities far more consistently than those in the remaining three categories. As in 2011 and 2012, it is likely that some organisations are scored too highly because the survey sampling methodology selects information for activities in each organisation’s largest recipient country.

The need to use purposive, rather than random, sampling means we cannot be sure whether the sampled information is truly representative. Neither random sampling nor the selection of an ‘average’ activity for each organisation is possible without knowing about all of the activities that donors are implementing in a particular country, and without having that information in a structured, machine-readable format. However, including automatically assessed IATI data this year for the first time (where all activities, not just a sample, are assessed) should have mitigated these issues to a large extent for the 27 organisations publishing at least some current activity-level IATI data.
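Because IATI data is machine-readable, checks of this kind can be run over every activity in a published file rather than a hand-picked sample. The sketch below illustrates the idea only: it assumes the IATI 1.x XML layout (an iati-activities root containing iati-activity elements, with end dates recorded as activity-date elements carrying an iso-date attribute), and the file name and the one-year ‘current’ cut-off are illustrative assumptions rather than the Index’s actual methodology.

```python
# Minimal sketch of an automated check run over every activity in an IATI XML
# file rather than a sample. Element and attribute names assume the IATI 1.x
# layout; the file name and the one-year cut-off are illustrative assumptions.
import xml.etree.ElementTree as ET
from datetime import date, timedelta

cutoff = date.today() - timedelta(days=365)
activities = ET.parse("donor-activities.xml").getroot().findall("iati-activity")

current = 0
for activity in activities:
    end_dates = [
        d.get("iso-date")
        for d in activity.findall("activity-date")
        if d.get("type") in ("end-actual", "end-planned") and d.get("iso-date")
    ]
    # Treat an activity as current if it has no end date yet,
    # or if its latest end date falls after the cut-off.
    if not end_dates or max(end_dates) >= cutoff.isoformat():
        current += 1

print(f"{current} of {len(activities)} activities are current")
```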
