

Beyond Big Data: Innovation in Development Evaluation


Data collection is becoming more and more important in development aid. With the spread of digital information tools, the amount of information available in the development context has increased dramatically. Not only has the processing of traditional data – financial accounting, inventory, stocks of drugs, etc. – become much easier and cheaper; new and big data has also emerged: metadata on mobile phone or internet usage, banking transactions and much more. The value of this data has been recognized by all actors in development aid: organizations are eager to present data in order to prove their effectiveness and efficiency to donors and the public, whilst donors analyze this data to administer their aid portfolios. Nowadays, data transparency is key to success in the development sector.

However, this new addiction to data has led to increased demand for measurability, even in areas that were traditionally considered barely measurable. Development aid organizations in particular are widely engaged in activities that are hard to capture in numbers:

Supporting local authorities in developing accountable, effective and efficient workflows, for example. Or advising communities on conflict prevention. Clearly, it is possible to measure inputs: the number of workshops conducted, or the hours of consultant assignments registered. But even when measuring inputs, we realize that the quality of these inputs is hard to assess – and even harder to capture in numbers.

And the problem becomes even bigger when looking at the output or outcome of such efforts. If a development agency conducts a workshop on conflict resolution – what is the effect of that workshop? Can you possibly measure an improved awareness of conflicts, or a stronger sensitivity towards disputes among communities? We might just skip the outputs and outcomes and jump to impact measurement, but there we face so many distorting factors that any measurement is heavily biased by default.

Monitoring and evaluation departments within aid institutions and donors have so far somewhat uncomfortably, but generously, overlooked the issue. They have mutually agreed that any attempt to measure success in this context is futile, and therefore, if numbers are required anyway, they resort to inputs, or to clumsy and implausible quasi-qualitative figures: ‘The result of the project was that the ministry drafted five new strategies to reduce poverty.’ Needless to say, nobody can interpret such a number and understand whether the project was a complete failure or an immense success.

But whilst measuring the success of technical services in the development context is difficult, we should still try – and perhaps think of innovative ways to do so.


There are basically two ways to measure the outputs and outcomes of these kinds of activities: either we ask independent participants about their personal opinion, or we try to observe the outcomes directly. While there are many approaches to asking participants (surveys, focus groups, interviews), these are often time-consuming and expensive if they are to produce reliable information. Observing outputs and outcomes would be preferable – if only we managed to identify the correct objects of observation, and the correct observers.

This means we have to be creative in defining the objects of observation, i.e. the results indicators for UNDP's technical support activities. What is crucial in any attempt to measure the outcome of a specific project is a focus on the actions of the recipient of the support – i.e. the agent.

a) Text analysis: How often and in what context were specific keywords used in official texts (strategies, legislative bills, press statements, social media communication)? Here it is important to analyze not documents that are the sole responsibility of the recipient of technical support, but documents whose content the recipient had to actively negotiate. That way we can measure how successfully a specific topic was incorporated into the work of the recipient, and whether he or she became an agent of change. There are some excellent online tools for text analysis, some even free of charge: here’s an overview.
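For a sense of how simple such a count can be, here is a minimal Python sketch; the keywords and folder name below are hypothetical placeholders, not actual UNDP indicators:

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical keywords we would expect to spread if the topic gained traction
KEYWORDS = ["poverty reduction", "gender equality", "conflict resolution"]

def keyword_counts(text: str) -> Counter:
    """Count case-insensitive occurrences of each keyword in one document."""
    text = text.lower()
    return Counter({kw: len(re.findall(re.escape(kw), text)) for kw in KEYWORDS})

# Tally the keywords across a folder of official documents (path is illustrative)
totals = Counter()
for doc in Path("official_documents").glob("*.txt"):
    totals += keyword_counts(doc.read_text(encoding="utf-8"))

for keyword, count in totals.most_common():
    print(f"{keyword}: {count}")
```

A raw count like this says nothing about context, of course – it is only the starting point for the more careful reading the paragraph above calls for.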

b) Environment analysis: Particularly for topics that relate to raising awareness among public audiences, it might make sense to measure the agent's output. How many posters on FGM (female genital mutilation) are hanging in how many schools, hospitals or other public institutions? How many advertisements on personal hygiene appear in newspapers or on billboards? This measure can capture not only the intentions, but also the (financial) commitment of the agent.

c) Investment analysis: The point raised in b) can be defined even more broadly as observing what financial commitments the agent has made to implement a specific change. For example, if vocational training is being promoted, we could measure the agent's investments in tools, books or teachers. What matters here is a focus on ownership and intrinsic commitment: UNDP support is not just a contract that must be fulfilled by the agent – the aim is to enable agents and to use the international support as a lever for national engagement.

d) Social network analysis: If we think of the case of civil society engagement, it might also prove valuable to look at the agent's personal connections. With whom can and does he or she interact? What contacts at what levels does he or she have? Does the agent have the cellphone number of a minister, or only the landline of an office manager? Who follows him or her on Twitter, and who are his or her friends on Facebook? What is his or her Klout score? Is he or she on the board of important civil society organizations, such as NGOs or trade unions? This kind of analysis has become extremely interesting for businesses, so aid actors should catch up!
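As an illustration of where such an analysis might start, the sketch below uses the open-source networkx library to score how well-connected each person in a contact network is; the names and connections are entirely made up:

```python
import networkx as nx  # pip install networkx

# Hypothetical contact graph: an edge means two people interact directly
G = nx.Graph()
G.add_edges_from([
    ("agent", "minister"),
    ("agent", "ngo_board_member"),
    ("agent", "journalist"),
    ("ngo_board_member", "trade_union_chair"),
    ("journalist", "minister"),
])

# Degree centrality: the share of the network each person is directly connected to
centrality = nx.degree_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

Tracking such a score over the life of a project could hint at whether the agent's reach is actually growing – though, as with all the measures above, the hard part is collecting honest contact data in the first place.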

However, as mentioned above, it is necessary to take a close look not only at the objects of analysis, but also at the observers. Currently, many technical support services are in fact evaluated by the commissioners themselves. And while this may well make sense in some cases (for example, when the technical support service is only a fraction of an overall project the commissioner takes responsibility for), it is far more debatable when the project consists exclusively of technical support work. Here, the project lead would be assessing his or her own success entirely. In such cases, at least an internal audit or M&E division should lead the observation efforts for evaluation purposes.

Now, there is widespread skepticism towards the abovementioned new addiction to data and measurability. To many it seems as if the guys from the evaluation desk were given a brand new hammer, and now all their problems start looking like nails. But in fact, if you look at it closely, the biggest innovation may not lie in data capturing and processing – but in our new ability to make the data available to a broader audience by presenting the results of the analysis in appealing ways. Collecting more and better data is only the groundwork for creating word clouds, heat maps, network graphics and much more. These are the powerful new tools that help you illustrate your work, your challenges and your success. They make it possible to tap into new sources of support for successful projects, be it among partners, donors or the public. So next time you are annoyed by all the data you should be collecting, think about how it might finally be useful to you.
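To make that last point concrete, here is a minimal sketch using the open-source wordcloud package for Python; the input folder is a hypothetical placeholder:

```python
from pathlib import Path
from wordcloud import WordCloud  # pip install wordcloud

# Combine the collected documents into a single text (folder name is illustrative)
text = " ".join(
    p.read_text(encoding="utf-8") for p in Path("official_documents").glob("*.txt")
)

# Render a word cloud and save it as an image for reports or presentations
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("project_wordcloud.png")
```

A few lines like these turn the same raw material used in the analyses above into something a partner, donor or member of the public can grasp at a glance.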
