2012 Aid Transparency Index

Data & Methodology

Data

Download the data for the 2012 Aid Transparency Index:

Download the report: 2012 Aid Transparency Index (PDF, 7.7MB)

Methodology

This annex sets out the approach taken to developing the 2012 Aid Transparency Index, including the methodology, data collection and the weighting and scaling of the Index.
In 2010 and early 2011, a number of assessments of the transparency of aid agencies were published, including the Center for Global Development/Brookings Institution Quality of ODA report,102 Brookings’ Ranking Donor Transparency in Foreign Aid,103 AidWatch’s 2010 Annual Report104 and Publish What You Fund’s 2010 Aid Transparency Assessment.105 A common challenge faced by all of these research projects was a lack of comparable, primary data on levels of aid information, which constrained accurate and specific assessment of those levels.

The methodology piloted in the 2011 Aid Transparency Index was developed in response to this finding in the 2010 Assessment. Having identified that a lack of current, primary data was a significant barrier to measuring aid transparency objectively, we shifted away from using proxy indicators based on secondary data sources to collecting the primary data ourselves, in partnership with 49 CSOs. In the 2011 Pilot Index, this new primary data was used to assess the availability of 37 specific types of information, or indicators, grouped in three different levels – organisation, country and activity/project. The number of organisations assessed was increased from 30 to 58 and included bilateral and multilateral donors, development finance institutions and private foundations. The resulting ranking was derived by assigning scores for each of the 37 indicators and grouping them by level.

The 2011 Index was explicitly a pilot and findings for certain indicators suggested a need to improve the methodology for 2012.106 However, an important outcome of the 2011 pilot was also the development of an evidence base which can be used to monitor donor progress regularly over time. Although there have been some minor changes to the methodology in 2012, primarily relating to new indicators and indicators that have been moved to a different level, the majority of the indicators remain the same, making it possible to compare individual donor performance with 2011.

This section sets out the details of the methodology and data used in the 2012 Index, reflects on the limitations and challenges faced in 2011 and 2012 and discusses how the methodology may develop in the future.

Who: 72 separate organisations or entities which provide aid were included. These ranged from traditional bilateral donors representing 37 countries (including all DAC members) to multilaterals, including development banks, four UN agencies, two health funds, three climate funds, and two private foundations.

What: As in 2011, the methodology assesses donors’ aid transparency at three separate levels – organisation, country and activity/project. 43 indicators of transparency were used, compared to 37 in 2011. Of these 43 indicators, one looks at the quality of Freedom of Information legislation; one measures engagement with IATI; and the remaining 41 were selected using the information types agreed in the IATI standard, most of which are based on the DAC CRS. They represent the most commonly available information items where commitments to disclosure already exist. The data for these indicators was collected and checked via an evidenced survey. There are six new indicators in 2012, two of which are not based on the CRS but are used to identify the format and comparability of the organisation’s data. Section 2 provides the full list of indicators, survey questions and the definitions used.

How: The majority of the 41 specific information types were searched for in surveys, initially undertaken by donor country-based CSOs or national CSO platforms, or by a CSO with a particular interest in that organisation or agency.107 Where no organisation could be found to complete a survey, Publish What You Fund undertook the work. The initial survey findings were then sent to the organisation or donor agency for an iterative process of verification and correction (see the Acknowledgments section for details of who undertook each of the surveys and which donors reviewed them). Results were then re-checked and standardised across indicators.

When: The data collection period ran from 1st May–31st July 2012. Initial data collection occurred in May–June; donor feedback took place over a staggered three-week period, from late June to early August. Further data verification, standardisation and cleaning then occurred in August 2012, before data analysis in late August and early September.
The approach was designed to sample and collate data about the publication of key types of current aid information for each donor and agency in ways that generate a comparable, robust data source that is specific, detailed and verifiable. “Current” was defined as published within the 12 months immediately prior to the data collection period, so information published on or after 1 May 2011 was accepted as current.

Donor country and entities selection: We have extended the number of organisations covered in 2012 from 58 to 72. Organisations were selected based on their size (amount of ODA given)108 and status as the major spending agency for that country; their combined size (for donors with multiple ministries responsible for significant proportions of ODA, such as France, Japan and the U.S.); or because they are included in country or organisation-wide aid transparency commitments (such as the UK, EU Member States, IATI signatories109 and Commonwealth Member States that provide aid).
Three climate finance funds have also been included in 2012, primarily to gauge how much information on funding for climate action is already accessible and what is currently being captured through aid information.

1. Data collection method: Surveys were initially completed by CSOs. Survey respondents were asked to search organisations’ websites, documents and databases to find proof of the existence and availability of information in the form of a URL or link to it.110 Information published in any language was accepted, although it is preferable for accessibility if it is in a language widely used in the relevant recipient country. However, language did not affect whether an indicator was scored.

2. Aid recipient country and activity selection: CSOs selected the current largest aid recipient country for that aid agency. If the current largest recipient country of aid from the agency was not known, the current largest recipient country of aid from the donor government as a whole was selected. If this was also unknown, the most recent OECD DAC figures (2010) were used to identify the aid recipient to survey. Three projects were then selected within that country programme.
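To make the fallback order concrete, the sketch below expresses the selection logic in Python. The function, its inputs and the example country are illustrative only; they are not part of the Index tooling.

```python
def select_recipient_country(agency_top_recipient=None,
                             donor_top_recipient=None,
                             dac_2010_top_recipient=None):
    """Apply the fallback order described above: the agency's largest current
    recipient first, then the donor government's largest recipient, then the
    largest recipient according to the most recent OECD DAC figures (2010)."""
    for candidate in (agency_top_recipient,
                      donor_top_recipient,
                      dac_2010_top_recipient):
        if candidate:
            return candidate
    raise ValueError("No recipient country could be identified for this agency")


# Hypothetical usage: the agency-level figure is unknown, so the donor-wide
# figure is used instead.
country = select_recipient_country(donor_top_recipient="Tanzania")
```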

3. Data collection: The approach to finalising the survey was an iterative process of searching, evidencing and checking the availability of information. Survey respondents were asked to answer questions on the availability of 41 specific types of information necessary for meeting the international best practice standard for aid transparency, at the organisation level (nine indicators), at recipient country level (seven indicators) and the activity or project level (25 indicators). The list of survey questions was designed to examine the availability of information at all stages from policy to implementation, including design, evaluation and audit.

The questionnaires were filled in by exploring donor organisations’ websites to find proof of the existence and availability of information. This was evidenced by submitting the URL or link to that information. It was also recorded in the data collection whether the information was always or only “sometimes”111 available and whether it appeared that the organisation actually collected that information item, even if it was not published. This data was not used in the weighting or indexing. The full dataset of all the items found to be collected, sometimes or always published for each organisation can be found in chart 12 in Annex 2 and also on the Publish What You Fund website: http://publishwhatyoufund.org/index

4. Data verification: Responses to the surveys were reviewed and links checked by Publish What You Fund to ensure that all findings were evidenced and standardised across the surveys. In order to establish if information was always published, Publish What You Fund selected a minimum of five activity level projects in the relevant recipient country in order to ascertain whether this information was consistently available. If information was not provided for an answer then an additional search of agency websites in English and the local language was conducted. If there was a difference in the amount of information provided in English compared to the local language then whichever provided the largest amount of information was selected.112
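Much of this verification was a manual exercise, but the link-checking step lends itself to a simple script. The sketch below, which assumes the third-party requests library, flags evidence URLs that no longer resolve so the corresponding answers can be re-checked by hand; it is illustrative rather than the tool actually used.

```python
import requests


def find_broken_evidence_links(evidence_urls, timeout=10):
    """Return (url, problem) pairs for evidence links that do not resolve."""
    broken = []
    for url in evidence_urls:
        try:
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            if response.status_code >= 400:
                broken.append((url, f"HTTP {response.status_code}"))
        except requests.RequestException as error:
            broken.append((url, str(error)))
    return broken
```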

The surveys were then returned to the CSOs that had completed them for checking, before being sent to the relevant organisation or agency. Surveyed organisations were given a period of three weeks in which to reply, but replies were still accepted and actively sought for another two weeks. For 27 organisations, however, no response was received.113 In those cases, Publish What You Fund reviewed the survey for a second time and conducted more extensive searches for each question.

Publish What You Fund’s verification and standardisation process included checking the evidence provided in all the organisation surveys (website URLs) to ensure that all scores of “published” data were completely accurate. In several cases the URL provided as supporting evidence did not show the information suggested, so the results were downgraded to either “sometimes” published (if the information was published only for a few projects), or just “collected” if the information was not publicly available for any projects but the organisation suggested that they did hold that information through their response. During this process, additional qualitative data was used to inform the individual organisation profiles in Section 4. This included:

  • the format that the information was provided in (project database, PDF, website),
  • where the information was provided (a central donor website, country-specific donor website, embassy website),
  • the language of publication (donor’s language, English, French, Spanish, etc.),
  • any other interesting features in the way the data was provided.

A round of standardisation of scoring and of what was accepted as an answer was then conducted across all indicators and organisations. Finally, a round of checks was conducted on the indicators relating to a comprehensive database of activities (indicators 11 and 18), because it was not possible to score positively on indicator 11 (a centralised, online public database of all the organisation’s activities in all countries) without also scoring positively on indicator 18 (a centralised, online, public database of all the organisation’s activities in the surveyed recipient country).
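The consistency rule for indicators 11 and 18 can be expressed as a simple check. The sketch below assumes survey scores are held as a mapping from indicator number to 0 or 1; the function name and data structure are illustrative.

```python
def check_database_indicators(scores):
    """Flag the inconsistency described above: a positive score on indicator 11
    (a centralised, online public database of all the organisation's activities
    in all countries) implies a positive score on indicator 18 (the same
    information for the surveyed recipient country)."""
    if scores.get(11) == 1 and scores.get(18) != 1:
        raise ValueError(
            "Indicator 11 is scored positively but indicator 18 is not; "
            "these answers need to be re-checked"
        )
```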

Scoring the indicators

For the 41 surveyed indicators, the information availability was judged by whether a specific piece of information was found to be:

  • Always published (scored 1):
    For organisation and country level questions: consistently or regularly;
    for the activity level questions: for all projects in the recipient country.
  • Sometimes published (scored 0 but used to sequence organisations with equal scores):
    For organisation and country level questions: inconsistently or irregularly;
    for activity level questions: for some projects in the recipient country.
  • Not published, but collected (scored 0):
    Where the information is not publicly available but the organisation did appear to collect it.
  • Not collected (scored 0):
    In some cases the organisation stated that either it did not collect the information, or the survey respondent did not know and the organisation did not confirm whether they collected it or not.

The only results used for the purposes of scoring the Index were where information was always published. These were scored 1. All other responses were scored 0. The full dataset is presented in chart 12 in Annex 2.
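As a minimal sketch, the scoring rule for the surveyed indicators can be written as a simple lookup: only “always published” earns a point, while “sometimes published” is kept separately to sequence organisations with equal scores. The category labels follow the list above; the code itself is illustrative.

```python
# Points awarded in the Index for each survey answer category.
INDICATOR_POINTS = {
    "always published": 1,
    "sometimes published": 0,          # retained separately to sequence ties
    "not published, but collected": 0,
    "not collected": 0,
}


def score_indicator(answer):
    """Return the Index score (1 or 0) for one surveyed indicator."""
    return INDICATOR_POINTS[answer.lower()]
```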

To establish that information was always published, the survey respondent selected a minimum of three activity level projects in the relevant recipient country in order to ascertain whether this information was consistently available. When checking and verifying the surveys, Publish What You Fund checked a further five projects in the same country to confirm that the findings were representative. The donor was asked to confirm whether the responses were representative. Despite the checking process undertaken by donors, we have the least certainty about the “not published” category, which by definition cannot be verified independently as it is not public.

At the organisation level an additional two indicators were used as proxies to assess the commitment to aid transparency and accessibility of aid information. These were:

  1. Quality of the organisation’s Freedom of Information Act or equivalent disclosure policy; and
  2. The organisation’s engagement with the International Aid Transparency Initiative (IATI).

Indicator 1 – Quality of Freedom of Information Act
As noted in the 2011 Pilot Index, the binary indicator for Freedom of Information Acts (FOIA) was not sufficient because not all legislation or disclosure policies are of the same standard; nor are they implemented to the same extent. At the time, there was no systematic analysis of FOIA quality that could be used as a data source for the Pilot Index. Since then, however, the Centre for Law and Democracy and Access Info Europe have published the Global Right To Information (RTI) Rating which provides a comprehensive analysis of FOIA quality.114

The RTI Rating scores the strength of the legal framework in guaranteeing the right to information in a country. Based on a 61-indicator survey, the legislation is graded on a 150-point scale. This has been adapted to the framework used for scoring the other indicators (apart from indicator 2; see below) used in the Index. For more detail on how this methodology was developed, including for development finance institutions, see Box 6 on page 18.

Indicator 2 – Engagement with IATI
Engagement with IATI was selected as a proxy for commitment to aid transparency and for the format and accessibility of the information. IATI is specifically designed for the comprehensive publication of current aid information in a format that is comparable and timely as well as accessible, because it is produced in a machine-readable format. Donors can score a maximum of two points depending on their level of engagement with IATI, which is assessed on a 0–3 scale and then rescaled proportionately. The scoring used is as follows:
3 = Publishing to IATI – has begun publishing current data to the IATI Registry.115
2 = Implementation schedule – has published an implementation schedule but has not yet begun publishing to the Registry; or the published data is not current (more than 12 months old).
1 = Signatory – has signed IATI but has not published an implementation schedule or published to the Registry.
0 = No engagement to date – has not signed IATI or published to the Registry.
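Both proxy indicators are reported on their own scales and then brought into line with the framework used for the other indicators. The sketch below assumes a simple proportional rescaling: the 0–3 IATI engagement level is rescaled to the maximum of two points described above, and the 0–150 RTI Rating is mapped onto a 0–1 scale. The actual adaptation of the RTI Rating is described in Box 6, so the FOIA function here is an assumption for illustration only.

```python
def iati_points(engagement_level):
    """Rescale the 0-3 IATI engagement level proportionately to a maximum of
    two points (assumed linear rescaling)."""
    if engagement_level not in (0, 1, 2, 3):
        raise ValueError("IATI engagement level must be 0, 1, 2 or 3")
    return 2 * engagement_level / 3


def foia_points(rti_score):
    """Map a 0-150 RTI Rating score onto the 0-1 scale used for the other
    indicators (assumed proportional; see Box 6 for the actual adaptation)."""
    return rti_score / 150
```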

Surveys and the two additional FOIA and IATI results were collated for all the 72 donor organisations – see chart 12 in Annex 2 for the full dataset.

Weighting, scaling, ranking and grouping

Different weighting and grouping options were considered in consultation with our peer reviewers.116

Weighting: As in 2011, giving each of the three levels an equal weight of 33.33% was chosen because different levels of transparency are important for different types of information users. We decided that no level should have a higher weighting than any other. While different groups and constituencies will require and value the various aid information types differently, the emphasis has been on keeping the weighting as simple and clear as possible. The weighting approach is shown in diagram 1. A tool is provided on the Publish What You Fund website which allows you to reweight the data in line with your prioritisation and assessment of the importance of different types of information:

http://publishwhatyoufund.org/index

Scaling: A common aim of the 2011 and 2012 Indexes is to capture actual performance and progress over time. This guided the decision not to rescale the indicators and to give all levels an equal weighting. Scaling would disguise actual performance of organisations in favour of ensuring that each level shared the same average. The decision not to rescale each of the three levels means that the average score for each level is different. At the organisation level it is 53%; at the country level it is 35%; and at the activity level it is 35%. In Sections 3 (Results) and 4 (Organisation Profiles) we include some analysis of donors’ performance against the average for each level. Sensitivity analysis suggests that the Index ranking is not unduly affected by performance on any particular indicator.

Ranking: Based on the three weighted levels, the overall ranking of the 72 agencies was then developed. Any donors that scored exactly the same would have been ranked equally, but with “sometimes” answers used to visually sequence organisations with equal scores. This approach was necessary in the 2011 Pilot Index but, in 2012, no donors scored the same.

Grouping: The five ranking groups, ranging from good to very poor, have been used again in 2012. This provides a mechanism to compare donor performance within specific score ranges, without over-emphasising minimal differences in scores. As in 2011, the scores of 0–19%, 20–39%, 40–59%, 60–79% and 80–100% were chosen, partly for consistency and to facilitate comparison between 2011 and 2012; and partly to enable analysis of the performance of all 72 organisations in relation to each other.

The three levels are weighted equally in thirds. Questions grouped under the levels are weighted equally within each level, based on scores of 1 or 0, apart from quality of FOIA and engagement in IATI (see Box 6 and p.104 for more on how these two indicators are scored). As in 2011, the decision was taken to double weight the IATI indicator as it is a proxy for both commitment to aid transparency and the format and accessibility of the information.
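A minimal sketch of the calculation described above, assuming indicator scores are held as 0–1 values per level: each level contributes one third of the overall score, indicators within a level are weighted equally apart from the double-weighted IATI indicator, and the result is then placed into one of the five ranking groups. The exact treatment of the double weighting and the name of the 60–79% group are assumptions; they are not spelled out in this section.

```python
GROUPS = [(80, "good"), (60, "fair"), (40, "moderate"), (20, "poor"), (0, "very poor")]
# "fair" is an assumed label for the 60-79% band; the other labels appear in the text.


def level_score(points, weights=None):
    """Weighted average of the 0-1 indicator scores within one level."""
    weights = weights or [1] * len(points)
    return sum(p * w for p, w in zip(points, weights)) / sum(weights)


def overall_score(org_points, org_weights, country_points, activity_points):
    """Overall percentage score: each level carries an equal third. Pass a
    weight of 2 for the IATI indicator in org_weights and 1 for the rest."""
    org = level_score(org_points, org_weights)
    country = level_score(country_points)
    activity = level_score(activity_points)
    return (org + country + activity) / 3 * 100


def ranking_group(score):
    """Place a 0-100 score into one of the five ranking groups."""
    for threshold, label in GROUPS:
        if score >= threshold:
            return label
```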

Challenges, limitations and lessons learned

Addressing challenges from the 2011 Index

The 2011 Index was explicitly a pilot. In 2012, we have built on the methodology, taking into consideration the challenges and limitations that we faced in 2011 and any lessons learned, particularly in relation to definitions for certain indicators and what we accept as “always” published. The following issues remain:

  • Donor organisations not covered. Although we have added 14 new organisations in 2012, bringing the total up to 72, the coverage of agencies is still by no means comprehensive. A significant constraint is capacity, both inside Publish What You Fund and in finding CSO partners with the time and resources to undertake the surveys. Nevertheless, we have begun to address some of the larger gaps – the UN system, for example, where three agencies are covered in the Index in 2012 (UNDP, UNICEF and OCHA) – and are now capturing a large proportion of development finance institutions. The dataset, methodology and data collection platform are open and free for others to use. We encourage other organisations and researchers to further expand this coverage and focus on donors, sectors or countries that they are particularly interested in; for example, all donors operating in fragile states or all donors providing funding to the water sector or climate finance. We welcome feedback on this suggestion.
  • Representative nature of an organisation. We have attempted to address this challenge from 2011, where, in a number of cases of highly fragmented donors, one or two agencies (or departments) were surveyed but these agencies only covered a relatively small proportion of aid spent by that donor overall. These results were not a particularly good proxy for the whole of the donor’s aid transparency. Consequently, the agency or organisation assessed is always specified. The ranking is also made on the basis of agencies rather than countries. This issue particularly applies to China, France, Japan, the U.S., the European institutions and UN agencies. This year, we added two French ministries (alongside AFD), MAE and MINEFI; we added a fourth European department, FPI, in addition to DG DEVCO, DG Enlargement and ECHO; we assessed Japan’s MFA in addition to JICA; and DECC, FCO and MOD in the UK, in addition to DFID.
  • Country versus agencies. We received feedback from some donors that we should not be considering agencies separately, but should rather consider that donor as a whole. We opted to maintain the disaggregation of agencies for several reasons. First, no two agencies in this Index score the same. There is often wide variation in the amount of information made available by different agencies in a single country. Second, agencies often retain a large amount of autonomy to decide how much information they make available, and should therefore be held accountable for that. Third, high performing agencies should not be pulled down by lower-performing agencies, and lower-performing agencies should not have their poor performance masked in an average. Finally, it was unclear how we would aggregate agencies into a single “country” score in a way that reflected wide variations in performance in a country. For example, if all UK agencies’ levels of transparency were averaged to provide a single score, it would be 42.1%, placing the UK in the moderate group (its median score would have been 26.1%, placing it in the poor category) despite the high score of 91.2% for DFID, which accounts for 90% of UK ODA. Ranked as five separate agencies, it is possible to see the variation between their performance and which common indicators they collectively perform well or poorly on. Moreover, it would have been necessary to take into account the proportion of a country’s aid delivered by each separate agency in order to create an aggregate country ranking that fairly reflected that country’s level of transparency. This information is not always available.
  • Similarly, it is not clear how representative the activities assessed are. The Index methodology will continue to be constrained by the fact that, for most donors, it is not possible to randomly sample typical projects. Precisely because information is usually either not published systematically, or else is only available as unstructured data, it is difficult to calculate what a “typical” project is. There are two ways of approaching this challenge: 1) to look at all published projects for that donor and try to calculate the average based on the information they make publicly available; or 2) to ask the donor to clarify what an average-sized project is and provide the details of how this figure has been calculated. Option 1 would create an unfeasible increase in the resource intensity of each survey – when multiplied by the large number of donors now included in the Index, it would make the process impossible. Option 2 would not provide independently verifiable data, and there is a risk that responses would not be received from all donors, meaning that two different methodologies would have to be used for activity selection. We recognise that the methodology used is not ideal but, of the options available, it strikes the right balance while information is not available in a structured format. See p.111 for a discussion of possible future changes to the methodology.
  • The information types assessed do not constitute a comprehensive list of all the information and data donors collect or make available. However, feedback from one peer reviewer suggested that the number of indicators we are using is high, and that we should in future look to reduce the number of indicators. We could instead rely on a smaller number that are representative of donors’ performance across the existing set of indicators. We will consider this as part of possible future changes to the methodology (see overleaf).
  • Donor organisations not responding to cross-checking of the survey results. Some organisations did not respond to the survey results sent to them for review. Brazil, the Clean Technology Fund, Korea-EDCF and USAID all replied but declined to comment on the survey answers. No response was received from 23 donors (see footnote 113).
  • The finding on the levels of “information collected but not published” is the most problematic part of our data. In a number of cases, donors did not respond, and the judgement that an item was collected was instead based on the respondent’s existing knowledge. However, these responses were not used for scoring and ranking levels of individual organisation transparency. Some broad trends can be seen in the chart in Annex 2; however, as we are not confident that any conclusions would be sufficiently robust, we have excluded any additional analysis of this data from the 2012 Index.
  • In the 2011 Pilot Index, we noted that the survey did not look at the format each information item was provided in. This was only explored during the verification process by Publish What You Fund. In 2012, we included specific questions on data format in the survey. These questions have not been used in the scoring or ranking, partly because of the wide variation in respondents’ familiarity with data formats. Yet the point made in 2011 remains: format is important. Information that is provided in a machine-readable format (e.g. CSV, XML or Excel) is more useful than if the format available is solely free text or a website. PDFs, which are not machine-readable, are particularly difficult to extract information from. In 2013, we will look at the format in which information is provided more closely. See p.111 for a discussion of possible future changes to the methodology.
  • We have included a new indicator to ask which languages the information was made available in. The responses to this indicator were not used to score or rank donors, as it was not clear how we would apply broad-brush responses to this question (“information is generally available in…”). We will consider how to measure the amount of information available in partner country languages as part of future changes to the methodology.
  • Poorly designed and hard to navigate websites continue to be a problem for collecting the data used in the Index. Responding to this problem in the 2011 Pilot Index, we included another free-text question for 2012 asking respondents how easy the information was to find. This open-ended question solicited a variety of responses, all of which can be seen on the Index website.117 We have also included some of the more interesting agency-specific observations in Section 4. Possible future changes to the methodology could build on the responses we received to this question to design some tighter, more comparable questions, but this qualitative data was also useful in itself.
  • Contracts are not always provided in as much detail as would be desired. Although we would like all contracts to be published in full, most organisations that do publish contract documents have only published part of the entire agreement. Given the binary nature of the Index indicators, not accepting summary contracts would have led to these organisations receiving no points for this indicator. In order to encourage publication of at least some contract information, we have awarded points in the 2012 Index to all organisations that have published contracts, even if these are only partial. Additionally, tenders and contracts are often stated in separate sections of the donor’s website, making it difficult to link that information back to individual, specific projects or activities. The comprehensiveness and quality of documentation is a difficult issue in general, which we will seek to address in the methodology in future.
  • Exemptions are not addressed in this Index. We recognise this as problematic because there are often legitimate reasons for excluding specific information items (or sometimes entire projects) from publication where publishing such information may cause harm. The principle should be that exclusions are transparently stated, and at a low-enough level to allow exclusions to be challenged where they do not appear to be warranted, while at the same time ensuring that the purpose of legitimate exclusions is not compromised. However, no method for publishing this information yet exists – including in IATI. Over the coming year, we will encourage publishers to pilot an exclusions extension to IATI but we do not anticipate this problem being fully addressed by the time of the 2013 Index.
  • Comprehensiveness of activity-level data is a related problem. The Index relies on several steps to determine whether all or only some projects are published. First, the initial respondent selects three projects to see whether information items are published consistently for those projects. Second, as well as verifying those responses, Publish What You Fund checks a further five projects to see if those findings are more widely representative. Third, the donor is asked to confirm whether all or only some projects are published. Finally, the presence of a series of trigger words is noted. For example, if it is stated that the project information published is for “case studies”, “some projects” or “selected projects”, then it is assumed that the maximum score for any of the activity level questions should be “sometimes” rather than “always” (see the sketch after this list). As discussed above, these steps are imperfect because, without information being published in a structured, machine-readable format, it is not possible to determine comprehensiveness. We will begin to address this problem next year. See p.111 for a discussion of future changes to the methodology.
  • Data was collected within a specific time period, meaning that progress by donors since 31st August 2012 in relation to their aid transparency may not be reflected in the ranking.
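The trigger-word step in the comprehensiveness check above can be sketched as a simple cap on the activity-level answer. The phrase list comes from the examples in the text; the function is illustrative rather than the actual verification tool.

```python
TRIGGER_PHRASES = ("case studies", "some projects", "selected projects")


def cap_activity_answer(answer, page_text):
    """Downgrade an "always" answer to "sometimes" if the published material
    signals that only a subset of projects is covered."""
    text = page_text.lower()
    if answer == "always" and any(phrase in text for phrase in TRIGGER_PHRASES):
        return "sometimes"
    return answer
```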

Grouping of donors

Section 4 groups organisations by type, in order to display them alongside peers who may face similar challenges in implementing aid transparency.

Development finance institutions have been grouped together, rather than as bilateral or multilateral agencies, partly on the basis of feedback to the 2011 Pilot Index. Separation into groups is primarily to facilitate comparison of performance across similar sorts of organisations. However, we recognise that it is difficult to classify many of these organisations under a single category as many have multiple purposes, models and roles. This approach will be reviewed and revised for the 2013 Index after forthcoming analysis of the categorisation of aid agencies.

Scoring all donors for all indicators

It was decided to score all organisations on all indicators, and organisations were ranked accordingly. All of these organisations – bilateral agencies, DFIs, multilateral institutions and so on – are worth assessing together as they have an explicit development or poverty reduction mandate, mostly represent official external financing and all have an impact on recipient countries and actors. They should, therefore, be held to a common set of standards, whether or not their flows count as “official development assistance”.

Not all donors have or collect all the information that we ask about and so they cannot make it available. For example, some DFIs have highlighted that because they operate in the private sector, they do not have Memoranda of Understanding with governments of recipient countries. It could be argued that they should not be expected to have such agreements or be downgraded in the Index as a result of not publishing a document that does not exist. We have carefully considered this issue in relation to the wide variety of donors that are included in the Index and have concluded that it is not unreasonable to score all donors equally on whether or not they publish MoU-type documents. See below for a more detailed explanation of what we have accepted for the indicator on MoUs.
In addition to MoUs, some organisations have cited the difficulty in providing forward budgets when they do not set their own budgets, or publishing procurement procedures when they do not directly contract or implement activities. In such cases, we do not make exceptions based on the type of donor or the type of information, but we do make efforts to ensure that the information captured is fair and appropriate for that donor and accept appropriate documents that serve similar purposes to those set out in the indicator. For example, indicative three-year figures disaggregated to the level of theme or region are accepted for private foundations and trusts in lieu of three-year forward planning budgets. If the relevant and appropriate type of information is not published, the donor cannot score on that indicator.

Memoranda of Understanding

As detailed above, some donors do not sign MoUs, which are usually government-to-government documents. Rather than not score DFIs on this indicator or award them an average score for it, we decided to broaden the definition of an MoU and accept documents which set out a general agreement between the donor and the recipient government about the way in which the donor will work in that country, not specific to any project or activity.

Do some DFIs or private foundations publish MoUs? Some IFIs and DFIs do have general agreements with governments that are equivalent to an MoU – some of these are published, and some are not. The IADB and WB IDA both have country strategy papers that are developed in conjunction with the recipient government and explicitly serve the purpose of an MoU. Where these equivalent documents have been published, organisations scored positively for this indicator.

What if an agency does not have a relationship with recipient governments? Some donors are explicit that they do not have MoUs at all because they do not have a presence at an intermediate level above that of the activity. For example, private foundations and trusts (Gates, Hewlett) operate only at the grantee level, and their grantees are usually CSOs rather than governments. EC-ECHO provides humanitarian aid, which has its own distinct profile; in its case we have accepted general Partnership Agreements which set out the way that ECHO works with its partners. The UK’s CDC does not operate with governments, but it does sign agreements with fund managers that stipulate various conditions that the fund manager must adhere to, for example on reporting, investment code, sector or geographic restrictions. In both these cases, we would accept publication of these agreements.

If an agency is a wholly owned subsidiary, would we accept the MoU of the principal donor? Some donors are subsidiary, or ‘wholly owned’, agencies of another donor. In these cases, we would accept an MoU-type document published by the principal donor that i) specifically applied to the subsidiary and ii) was not superseded by a more immediately relevant document, closer to the subsidiary level.

  • The EIB is a wholly owned agency of the EU. If the EU published MoUs which applied to the EIB, then we would accept those. However, the EIB also has Framework Agreements with recipient governments, which are more immediately relevant to the EIB and therefore take precedence. Neither of these documents is currently published.
  • The EBRD is not a wholly owned subsidiary of the EU. Its shareholders include EU Member States and non-EU states (notably Canada and the U.S.). The EU’s MoU would therefore not be expected to cover the EBRD. The EBRD does sign Framework Agreements, but these are not currently published.
  • The development bank KfW is owned by the German government. We would therefore accept an MoU-type agreement published by BMZ (as the relevant ministry) as long as KfW does not also produce its own MoUs. Neither BMZ nor KfW publishes MoUs.

In conclusion, we accept that there is a wide variety in the ways that donors operate. We have taken appropriate measures to allow for the differences between donors and to accept appropriate documents that serve similar purposes to those set out in the indicator. We conclude that it is not unreasonable to score all donors equally on whether or not they publish MoUs or equivalent documents.

Future developments to the Index methodology

We recognise that the Index methodology is not perfect; it has been designed in response to the findings of both the 2010 Assessment and the 2011 Pilot Index: that donors are not publishing enough information about their aid activities. A methodology to measure the transparency of different organisations’ aid has had to take this significant constraint into account. This has meant that, thus far, we have focused more on the availability of aid information, rather than on accessibility or comparability. Nonetheless, accessibility and comparability are of the utmost importance if aid is to be truly transparent in a useful and meaningful way.

Simplifying the methodology, making it more robust

As the Index has evolved, it has become more complex in order to reflect the practices of diverse organisations. We need to reassess the purpose of the Index: that is, to provide an indication of the levels of aid transparency and show progress over time. Our research is undertaken precisely in order to encourage improvements in the amount of information made available – and ultimately, to encourage publication to the common publishing standard – IATI. It is worth considering whether a simpler, leaner methodology could achieve these goals, as well as how best to present the findings. We will consider whether 43 indicators, assessed using a manual data collection process, are still needed, or whether a smaller sub-set of these indicators would suffice to show differences in donor publication, the quality of that data, and ultimately to encourage publication of more and better data.

The goal of aid transparency is high quality publication through IATI. This is because IATI provides a structure for comparable data that can be easily accessed. It would therefore be logical to assess levels of aid transparency solely by examining the quantity and quality of information published in the various fields of the IATI Registry. Our index indicators have already been selected to reflect IATI fields. Given that IATI data can be programmatically measured – largely automatically, using a series of machine tests – this would make the index quicker and easier to produce. It would also more fairly reflect the full range of agencies’ activities, instead of being based on purposive samples. However, if the Index had been produced in this way in 2012, very many agencies would have scored zero, as they do not currently publish to the IATI Registry. Our aim is to assess overall aid transparency. In the future we hope to start measuring the quality and utility of published aid data better by focusing much more on that which is published to IATI. So our next step is to envisage what assessment methodology would best facilitate this.

The answer could be an Index with two main data sources – first, a simpler, leaner survey that measures performance by organisations not publishing to IATI; and second, an IATI data quality tool, which measures – specifically and in detail – the quality of IATI data publication across the full range of fields in the standard, for each activity. This would need to be designed carefully to ensure that the tests are meaningful, suitably targeted and appropriate to the context in which the organisation is operating. It should allow us to begin to answer questions such as: “What percentage of activities contain titles?”, “What amount of aid is this organisation publishing to IATI, and is that roughly what you would expect from this organisation (i.e. is the data comprehensive)?” or “Are project documents published in the official language of the relevant partner country?” From answers to these questions, it would be possible to build up a detailed picture of the quality of each donor’s data.
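As an illustration of the kind of machine test envisaged, the sketch below computes the share of activities with a non-empty title in an IATI activity file. It assumes an IATI 1.x-style XML layout in which <iati-activities> contains <iati-activity> elements, each with a <title> child; it is a sketch of one possible test, not a specification of the planned data quality tool.

```python
import xml.etree.ElementTree as ET


def share_of_activities_with_titles(iati_activity_file):
    """Percentage of <iati-activity> elements that contain a non-empty <title>."""
    root = ET.parse(iati_activity_file).getroot()
    activities = root.findall("iati-activity")
    if not activities:
        return 0.0
    with_title = sum(
        1 for activity in activities
        if (activity.findtext("title") or "").strip()
    )
    return 100.0 * with_title / len(activities)
```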

We will be considering how best to develop the methodology for the 2013 Index over the coming months and would very much welcome feedback on it: info@publishwhatyoufund.org