Kieran Walshe, professor of health policy and management at Alliance Manchester Business School, the University of Manchester, explains why the pace of healthcare policy innovation is causing academics headaches
I’ve spent most of my career researching, thinking and writing about health and social care policy and practice – especially about reforms and innovations and how (or whether) they work. I hope the research academics like me do contributes to the public good. But I think how we go about innovating in health and care systems is often pretty chaotic, while the opportunities to find out what works, and to learn and improve from this insight, are too frequently missed.
First, NHS leaders – along with think tanks and other national organisations with a lot of influence – are dedicated followers of fashion. Devolution and integrated care this year, hospital chains before that, GP commissioning a few years ago, foundation trusts and the purchaser-provider split a while before – innovations come and innovations go.
Consistency of purpose
But if we know one thing about management, it is that consistency of purpose and having a stable, long-term vision matter a great deal. This is hard to pin down in the organisational churn of the NHS, and it doesn’t help that the NHS rushes into each new idea with great enthusiasm and an often misplaced sense of urgency.
Second, there’s a lot of “black box” thinking – using a label like lean or integrated care – without much understanding or consensus about what it really means. Researchers find this difficult. How are we meant to evaluate the innovation if everyone’s idea of integrated care or enhanced care in nursing homes is a bit, or a lot, different?
Third, the NHS often expects some improbable outcomes from these innovations, even where there is little plausible reason or no evidence that doing X will produce outcome Y. For example, expecting integrated care to reduce demand for hospital accident and emergency services makes about as much sense to me as thinking it might reduce the consumption of fast food, or increase the uptake of ballroom dancing lessons.
The rationale or mechanism which connects innovation to outcomes is often poorly explained or understood.
Fourth, to get the money for innovations from NHS England, the Department of Health and Social Care or whoever, organisations must promise to achieve those improbable outcomes, and so healthcare leaders (and the people doling out the money) engage in a mutual exercise of magical arithmetic.
Dubious assumptions are made about the cost improvements to be had from acute care reconfigurations, or the way community care improvements will reduce demand for acute inpatient beds and release cash savings. Often, nobody goes back to check if the sums were right. Anyway, by then the money has been spent.
Fifth, all these initiatives are project managed with an impressive array of RAG rating spreadsheets, milestones and metrics to check they stay on track. There is often a sense of “groupthink” about the people closely involved – they are understandably committed to the innovation, but are not unbiased observers.
Ask people on the ground about what’s going on and you often get a very different picture from the one you hear in the steering group reviews. What can most diplomatically be described as cognitive dissonance abounds.
Lastly, after a year or two, things move on – or rather, the people who led the innovation initially have often moved on to do something else somewhere else, usually on the back of their reputation for innovation. The innovation’s funders move on too, often stopping the money without much of an exit strategy (what, you wonder, do they think will happen to the innovation they’ve invested in over two or three years?).
But the researchers are usually still there – partly because, in most cases, we only got to join the party halfway through, we spent a lot of time trying to work out what the innovation was meant to achieve, and we are still measuring that after everyone else has left the field.
There is a serious point here about how we could create and use evidence on service innovations more effectively. Every year when the HSJ100 comes out, I am struck that there’s rarely an academic or researcher to be seen on the list [Editor’s note: the chief executives of the Nuffield Trust (No 78), the King’s Fund (80) and the Health Foundation (97) all appeared in the most recent HSJ100, as did the Health Foundation’s director of research and economics Anita Charlesworth (88). However, all had seen their influence decline when compared to the 2017 list].
Perhaps that is what we researchers deserve? Or perhaps it says something about how the NHS management community thinks about influence and impact. Do politics, managerial fashion, and leadership diktats matter a lot more than evidence about what works?