Understanding impact: Finding what’s genuinely useful

Rosie McLeod and James Noble

There’s a fundamental problem with ‘impact measurement’ in the sector: charities can rarely demonstrate impact in any rigorous sense, yet there’s an expectation that they will. At a time when many charities are under more pressure than ever to show their relevance to funders, demonstrating impact has become a top priority. 

But let’s start with why it’s so rare. There are two big challenges: impact itself takes time, so you need to stay in touch with people to measure it; and impact is caused by many things, so you need a way to determine your own contribution. Addressing these requires expertise and resources rarely available to the sector.  

The Justice Data Lab, suggested by NPC and run by the Ministry of Justice since 2013, is unique in providing a solution to this problem. It lets charities that have worked with people who have offended compare the aggregate reconviction rates of their service users against a statistically derived control group, without compromising anyone’s privacy. It is the first service of its kind in the world and may represent the most realistic way for impact measurement to become routine and for us to learn ‘what works’.  

For this reason, we continue to argue that wherever administrative data on long-term impact exists, the Justice Data Lab should be the default model for helping organisations access it. This has begun to happen, with a cousin in the form of the UCAS Strobe service, and similar services being piloted by the Department for Work and Pensions and the Department for Education (via the Education Endowment Foundation). There’s also interest from abroad, especially in the US, Canada, and Australia, and in March 2021 we’ll be contributing to a session on the Justice Data Lab with the Ministry of Justice at the 14th United Nations Congress on Crime Prevention and Criminal Justice. But it’s been undeniably challenging to get these initiatives going in other areas of government, and the rate of use among charities is still relatively low, so it’s not an unmitigated success.  

In some ways, 2013 feels like a very long time ago. The digital shift within the sector has accelerated through the pandemic, with most charities expecting to continue on this path. With that, the nature of charities’ own data is changing too. What opportunities might this open up for another approach to helping them address the attribution question without overburdening them? What other innovations like the Justice Data Lab are possible? 

Most organisations can’t create a data lab or similar, so a lot of effort goes into finding other routes that might take us halfway there: measuring short-term outcomes, perceptions of impact, qualitative research and so on. But the narrative of proving impact, and the expectation of getting as close to proof as you can, needs to be continually challenged with realism. It’s much better to prioritise more useful research questions that help you to learn and improve. 

We don’t have the right incentives for charities to collect data that’s high quality and useful to them. Charities are expected to show a level and nature of change that is unrealistic. For example, the average effect size among charities using the Justice Data Lab is just 5%, or 2 percentage points, yet the pressure on charities to demonstrate big effects is unrelenting. This leads to inflated claims of impact in reporting, and to chasing after ‘proof’ of impact that’s unobtainable. 

Instead of asking charities to provide hard evidence, a better way forward would be to concentrate on data that can be used. We can make the measurement question more manageable by acknowledging that studying longer-term outcomes and impact is difficult, and that we should do it sparingly. If we accept the natural limits to the level of proof available, we can refocus on genuinely useful research questions. It should also help us to appreciate the value in all types of research, from validated surveys, benchmarking and value for money analysis, to practitioners’ observations; what Nancy Cartwright has referred to as ‘vouching’ rather than ‘clinching’ evidence.  

Progress is possible. We need to make it as easy as possible to access and use impact data in a way that doesn’t put the burden on charities, and to recast the role of evaluation when charities are doing it themselves. Getting robust impact data on your own is hard. Unless an infrastructural solution like a data lab is available, it’s better for charities to focus on other useful research questions, about reaching the right people, gathering feedback from users, and quality of delivery, that are within reach and help them manage effective delivery. 

We’d love to know what you think in response to these questions, comment below with your thoughts, ideas and what else we should be asking…

Comments
Alex Shears

I am really interested in the idea of being able to compare outcomes to a statistical group, to really pin down what outcomes can be attributed to a specific intervention. Can you tell me is this something that can only be done when looking at long term outcomes, or could this approach be taken when working on a 2 year project? It would be great to have a conversation with someone about this. Thanks!