Who would have thought that producing the only independent measure of aid transparency among the world’s major development agencies could be so hard! Make no mistake: producing the Aid Transparency Index is no mean feat. At the most basic level, we’re trying to compare 47 agencies using a combination of software and human effort. Across the 35 indicators we use, we need to manually review upward of 10,000 samples to make sure that the data and documents published by these agencies are not only what they say they are, but are also of sufficient quality. And over the six-month period when we undertake this work, we do this twice, so that agencies have a chance to improve their publication between the first and second rounds. So yes, that means a total of 20,000+ manual samples need to be reviewed. Once you consider that the samples can be in any language, and that every agency uses different formats for its tenders, contracts, evaluations, and other documents, the complexity and volume of work can appear overwhelming.
To get through the volume of sampling, and frankly, to prevent those working on the Index from going mad, we divide the work up across the wider Publish What You Fund team. So, imagine my distress when my heroic, but wholly disingenuous, offer of help to the team was taken at face value and I was given a bunch of agencies to review. Here’s my story:
Looking at so much aid data can actually be rather interesting. I’ve frequently found myself impressed by the quantity and quality of data which some agencies share. It’s easy to get distracted by an impressive project and on more than one occasion I’ve found myself reading through results documents, struck by the scale and impact of some of the world’s most important aid interventions. And at times, normally when sugar levels are high, I can’t help but be proud of the level of rigour that we apply to the Index.
But it can be tough. Even something as simple as verifying the location of a project can mean having to read multiple project documents and cross-referencing with Google Earth to make sure that Gicumbi District really is in Rwanda! Then consider that for each indicator we look at 20 samples, and you start to realise how time consuming this can be. Meanwhile, I can’t resist glancing at the progress bar at the top of the screen in our Index software, which seems to move at a torturously slow pace over the weeks while we’re doing the sampling.
I can’t complain really. The Aid Transparency Index has been a major driver of increased aid transparency across the world’s major aid agencies and development finance institutions, so no matter how difficult it is, we need to keep up the pressure. At the same time, the truth is that the team running the Index have a tougher job than those of us just helping with sampling. Firstly, they have to train us to ensure that standards are maintained, and for some of us the phrase “old dogs and new tricks” comes to mind. But they’re also the ones liaising with donors via phone and email, coordinating more than 40 independent peer reviewers, and making sure the amazing software that underpins the Index process stays afloat. Not only that, but they check our work and are always there to provide advice or to help us with particularly challenging samples.
Next on the list is the process of calculating and verifying the results using the Index scoring methodology. This will give us the final scores for each donor, which we’ll use to produce the Index, sorting and ranking the donors’ levels of transparency from “very good” to “very poor”. This is the exciting moment when we see who has topped the rankings this time, who has made significant gains since 2018, and where we have found backsliding. We are currently planning to launch the results and final report at a virtual launch event in late June; however, in the midst of the coronavirus pandemic we may need to delay until July – watch this space for updates on the final release date.
In conclusion, I always knew the Aid Transparency Index took a huge effort to produce, and I’m proud of the lengths we go to in making sure that it’s a rigorous and high-quality assessment. But as with most things, it always seemed easier when others were doing it.