The argument for standardising outcomes and value data is fairly well accepted. Standardised measures allow us to aggregate and compare data to build knowledge. In healthcare, established measures like Quality-Adjusted Life Years (QALYs) are predominant and underpin where resources go. But in our sector, an unintended consequence of the desire for aggregated data can be a mismatch between the data charities want to use, what funders want, and what matters to users.
The expectation that an outcome standard will always be appropriate can lead to conflict when it is imposed on a sector or population group whose context it doesn’t fit. We find these top-down attempts tend not to work.
This isn’t to say standard value or outcome measures aren’t achievable or useful. Rather, it is to say we don’t always develop them in the right way. Where measures are co-developed by people experiencing an issue and those delivering a programme or funding work, they can be relevant and reflective of people’s own experience and the meaning they ascribe to it.
This converging of ‘top-down’ and ‘bottom-up’ perspectives on the right measure takes work, and isn’t perfect – iteration over time reduces comparability, for example – but may offer the best way through debates which show no sign of ending.
We are now asking:
- Where are standard measures most and least accepted?
- What can we learn from sectors where measures are successfully co-developed and used?
We’d love to know what you think in response to these questions. Comment below with your thoughts, ideas and what else we should be asking…