Health Policy Insight, 3 January 2012
Health economist Professor Alan Maynard has spent much of his career highlighting variations in clinical practice. In his first Health Policy Insight column of 2012, Professor Maynard examines the history of the subject and asks whether the reduction of variation is in fact a mirage – and if so, whether ‘The Nicholson Challenge’ of £4 billion annual productivity savings is actually a euphemism for cuts. He concludes that if QIPP is no mirage, then hospitals must cut beds and dismiss staff.
Productivity variations are universal.
Furthermore, all markets (and a market is simply a network of buyers and sellers) have variations in price and quality. When you are searching for your new washing machine, computer or underpants, you are confronted by price and quality variations.
Thus for electronics, you get the Amazon price and then see if there are any reputable dealers doing better. For underpants the middle classes benchmark on Marks and Spencer.
These retail variations are similar to the variations in industrial productivity. In EU and North American markets, it is commonplace to find one hundred percent variations in the cost of production of similar products. Compare such nations with India and China, and the variations are larger. It is these variations which offer profit opportunities for the more efficient producers of goods and services.
Variations in healthcare
Is the production of healthcare different? The literature on clinical practice variations is ancient! Glover wrote about variations in tonsillectomy rates in 1938, and, in addition to quantifying the variation in surgical rates, noted that their most likely cause was differences in clinical opinion.
Bloor and his colleagues wrote about the same topic in 1974, and having studied the Scottish data and interviewed practitioners, came to the same conclusions: large variations in clinical practice existed and were explained by differences in medical opinion.
In 1976, the British economy was in crisis and there was a vigorous squeeze on public expenditure. The Secretary of State, Barbara Castle, gave the NHS zero funding growth, and published a document entitled ‘Priorities for Health and Personal Social Services in England’. This document noted variations in expenditure and activity, and highlighted the “efficiencies” of £40 million that could be garnered if the bottom 75% of the distribution emulated the best quartile’s average length of stay.
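As a rough illustration of that quartile arithmetic, the sketch below computes the bed-days that would be “saved” if every hospital above the best-quartile average length of stay fell to that benchmark. All figures (lengths of stay, admissions, cost per bed-day) are invented for illustration; none come from the 1976 document.

```python
# A rough sketch of the 1976-style quartile benchmarking arithmetic.
# All figures here are invented for illustration.

# Hypothetical average lengths of stay (days) across eight hospitals,
# each assumed to handle 5,000 admissions a year.
avg_los = [6.1, 7.4, 8.2, 9.0, 9.8, 10.5, 11.2, 12.0]
admissions = [5000] * len(avg_los)

# Benchmark: the boundary of the best quartile of average length of stay.
benchmark = sorted(avg_los)[len(avg_los) // 4]

# Bed-days "saved" if every hospital above the benchmark fell to it.
saved_bed_days = sum(
    max(los - benchmark, 0) * n for los, n in zip(avg_los, admissions)
)

cost_per_bed_day = 50.0  # assumed unit cost in GBP, purely illustrative
print(f"Benchmark average length of stay: {benchmark} days")
print(f"Potential 'efficiency': {saved_bed_days:,.0f} bed-days "
      f"(~GBP {saved_bed_days * cost_per_bed_day:,.0f})")
```

As the 35 subsequent years suggest, producing such a number is far easier than realising it.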
Atlases of variation
Around the same time, Jack Wennberg of Dartmouth College Medical School broke into print in the USA and showed large variations in processes and outcomes in the US Medicare programme. Starting with analyses of the North East USA, including Boston and New Haven, Wennberg and his Dartmouth colleagues have now produced an ‘atlas’ of variations which maps differences in activity and outcomes for the whole country.
Wennberg has been very influential in some US policymaking circles. For instance, some hope that the increased funding of the Obama healthcare reforms will be generated by squeezing greater productivity out of US providers as they reduce clinical practice variations.
In the 21st century (another!) Bloor analysed the activity rates of NHS consultants. She produced data for individual specialties in all English NHS hospitals. This was distributed by the Department of Health, and showed major variations in activity, with evidence of some clinicians consuming on-the-job leisure rather than caring for patients!
There is little evidence that such material, collected since 1996 at great public expense, has been translated into changes in clinical governance and practice. Avoidable waste seems to be institutionalised in management incapable of using data to inform productivity improvements!
Clinical practice variations: real or theoretical savings?
Considering all this, two questions spring to mind:
1) Why has it taken so long to exploit the potential of reduced clinical practice variations? A common American saying is that “what is regular, ain’t stupid”! Are the variations described over 70 years real opportunities for productivity gains or are they mirages, seducing policymakers into evidence-free optimism?
2) If they are a mirage, is the Quality, Innovation, Productivity and Prevention (QIPP) programme merely a cost-cutting rather than an efficiency-enhancing activity?
Whitehall Village is permeated with optimism about the potential of reducing clinical practice variations to solve the NHS funding crisis. The gospel according to McKinsey extolled the virtues of reducing variations: all PowerPoint and no critical content.
McKinsey proposed that £20 billion of savings would manifest themselves like manna from heaven, and no attempt was made to answer a big question: in practical terms, how do you get from where we are now to this Utopian goal?
These McNotions have been adopted uncritically by the Government and now manifest themselves as what Conservative health select committee chair and erstwhile health secretary Stephen Dorrell dubbed “The Nicholson Challenge”: can the Chief Executive of the NHS, David Nicholson, translate rhetoric into productivity improvements?
The variation sceptics
Despite the evangelical belief in clinical practice variations as epitomised by Nicholson and his team, and folk like the ubiquitous Muir Gray, not everyone believes in the gospel according to Wennberg-Dartmouth College USA. The critics of these gospels focus on some complex measurement issues, associated in particular with the measurement of patient illness in the 306 hospital referral regions used by the Dartmouth team.
Buz Cooper, an eminent physician and researcher, notes that these areas have very disparate income characteristics, with affluent metropolitan areas containing pockets of acute poverty. Cooper asserts that poverty is the major determinant of variation in activity, and that the relationship is non-linear and much skewed by the very poor with complex medical conditions. Thus Cooper argues that high levels of activity are indicators of patient need, not waste due to practice variations.
Cooper is not alone in his criticisms of the Dartmouth-Wennberg thesis of massive savings to be made if practice variations can be eradicated. Again the area is complex, as epitomised by the Bach versus Skinner et al debate in the New England Journal of Medicine in 2010. The bottom line in this debate is whether the savings of 30% of the Medicare budget postulated by the Dartmouth group, or the Nicholson savings, are merely theoretical, actually achievable, or a complex mix of the two.
Given this debate about whether variations are real measures of inefficiency or mere products of variations in local patient need, caution should be deployed in assuming that NHS productivity can be driven up easily. It is time for a sharp dose of “scepticaemia”!
Perhaps surgical variations are better indicators of inefficiency than differences in medical practice? To interrogate such issues, better data and better use of existing data are essential. Does the government’s QIPP programme do this?
QIPP: define your terms!
Before dealing with this issue, it is important to define terms in relation to the NHS debate: what do the QIPP activities mean? Quality can be defined in relation to process and outcome, where hopefully adherence to the former (e.g. clinical guidelines) gives patients improved outcomes - i.e. improved length and / or quality of life.
Innovation is a word much abused by big pharma, who dress up old drugs in new colours or packages and call it ‘innovation’! Real innovation involves either the production of identical outcomes at a lower cost, or improved outcomes at the same cost. To demonstrate such innovation, good cost and outcome data are essential.
Productivity can be related to process or outcome: i.e. increased activity at a lower cost might be an increase in productivity, if outcomes are stable or improved. Again, good input / cost and outcome data are essential to show increased productivity. Also, the term productivity seems identical to innovation: why should this be so? A sketch of the test follows below.
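To make the distinction concrete, here is a minimal sketch of the productivity test described above, with entirely assumed activity, cost and outcome figures: an apparent gain counts as productivity only if outcomes hold steady or improve.

```python
# A minimal sketch of the productivity test described above.
# All activity, cost and outcome figures are assumed for illustration.

def productivity(activity: float, cost: float) -> float:
    """Crude productivity ratio: units of activity per pound spent."""
    return activity / cost

baseline = {"activity": 10_000, "cost": 5_000_000, "mean_outcome": 0.72}
current = {"activity": 11_000, "cost": 5_000_000, "mean_outcome": 0.73}

gain = (productivity(current["activity"], current["cost"])
        / productivity(baseline["activity"], baseline["cost"])) - 1

# Only claim a productivity improvement when outcomes are stable or better.
if current["mean_outcome"] >= baseline["mean_outcome"]:
    print(f"Productivity up {gain:.1%}, with outcomes maintained.")
else:
    print("Apparent 'gain' may be a cost cut with worse outcomes.")
```

On these definitions, the same test with costs falling while activity and outcomes hold constant would count as innovation, which is perhaps why the two terms shade into each other.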
Prevention may or may not be better than cure! The cost-effectiveness evidence base for prevention is very poor. Policies tend to be advocated by adherents to the creed of prevention even though the supporting trial data are often poor and usually lack any consideration of costs. Productivity gains from prevention may accrue in the longer term, when we can identify cost-effective interventions to mitigate ‘big spend’ areas such as obesity and diabetes.
In evaluating QIPP, it is necessary to be able to distinguish real productivity improvements from budget cuts which ‘save’ money and have unknown effects on activity - and particularly outcomes. What can the Department of Health offer in the way of evidence of QIPP being efficient?
QIPP: the Department’s evidence
Those seeking enlightenment as to the performance of the QIPP programme can scrutinise the quarterly reports, e.g. David Flory's “definitive account” for Q2 2011, published in late December 2011.
This material offers a cautiously optimistic picture of QIPP savings being achieved in the first half of the year. It shows how MRSA rates have been reduced and patient-reported outcome measurement is now identifying good and poor performers for hip and knee replacements: e.g. Aintree, Barking, Barnsley, Doncaster, Heart of England, Mid Yorks, North Tees, the Royal Orthopaedic and Sandwell are NOT good places for treatment, according to provisional PROMs data!
The quality data (Q of QIPP) are very limited. The data on innovation (I) are even thinner, being a list of expected actions with little attempt to prioritise them or cite chapter and verse as to their relative cost-effectiveness. The review of productivity (P) is dominated by finance and global figures about savings, with no evidence that these economies were achieved without affecting service quality and quantity. Sickness rates continue to decline. The report also offers workforce data where changes are small, with the cut in the health visitor stock (2.2%, or 174 posts) being the most noticeable outlier. The prevention (P) element of QIPP focuses on the declining number of health visitors and prioritising the reversal of this trend, together with developing breast feeding and screening programmes.
This is a fascinating collection of data, which hopefully will be widely read despite being buried in the pre-Christmas festivities! However, as a measure of the success of QIPP, it is sadly wanting.
The standard NHS response to austerity is to close beds and cut staff. Currently these responses are constrained by the Coalition Government’s renewed interest in waiting times and targets. There is little detail about how PCTs and trusts are saving money, but presumably this could be illuminated by careful case studies as well as improved quantitative analysis.
Do improvements in clinical quality deliver improved “bottom lines”? Where are the real data about QIPP and NHS reform? Rauh et al, in the New England Journal of Medicine of 29/12/11, remind us of some age-old management issues. They point out that clinical improvements can reduce variable “layer 1” costs such as supplies and drugs. Speeding patient flow through a nurse-led clinic may reduce “layer 2” costs such as nursing hours. Reducing “layer 3” costs such as beds, equipment, theatre time and consultant activity produces additional capacity without necessarily producing bottom-line savings. Indeed, with PbR it increases the incentive to admit more patients and worsen commissioner (PCT / CCG) funding pressures.
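The logic of Rauh et al’s layering can be paraphrased in a few lines of code. In the sketch below, the layer names and examples follow the passage above, but the parenthetical labels, the cost figures and the cash-releasing flags are illustrative assumptions, not data from their paper:

```python
# A sketch of the cost "layers" described above. Layer names and
# examples follow the text; figures and flags are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CostLayer:
    name: str
    examples: str
    annual_cost: float    # assumed GBP figures, purely illustrative
    cash_releasing: bool  # does a reduction reach the bottom line?

layers = [
    CostLayer("Layer 1 (variable)", "supplies, drugs", 2_000_000, True),
    CostLayer("Layer 2 (semi-variable)", "nursing hours", 4_000_000, True),
    CostLayer("Layer 3 (fixed)", "beds, equipment, theatre time",
              6_000_000, False),
]

# Suppose clinical improvement trims 10% from each layer.
for layer in layers:
    saving = 0.10 * layer.annual_cost
    effect = "cash saving" if layer.cash_releasing else "spare capacity only"
    print(f"{layer.name} ({layer.examples}): GBP {saving:,.0f} -> {effect}")

# Layer 3 "savings" appear as idle beds and theatre sessions; unless that
# capacity is closed - or, under PbR, filled with extra admissions that
# worsen commissioners' costs - hospital spending does not actually fall.
```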
The challenge for the hospital sector is to make these savings by clinical innovation and then pass them on to community care to facilitate quicker, efficient discharges and policies to reduce referrals, many of which are currently un-evidenced. This means hospitals must innovate and improve clinical performance so as to shrink. In so doing, they will have to cut beds and dismiss staff, and convince Whitehall and patients that the quality of care remains high. Only if hospitals shrink will resources be freed up for non-hospital use.
If QIPP is working, the Department should be offering us measures of hospital redundancies and bed closures as measures of its success. Instead, the bed stock appears to be as resilient as the hospital workforce. Thus QIPP appears to be failing.
Conclusions
The potential of a programme to reduce clinical practice variations may be over-hyped, given poor measurement and the decades-long failure of healthcare systems, public and private, to manage variations and improve productivity.
Managers remain reluctant to challenge clinical practice and clinicians tolerate un-evidenced variations amongst their peers. This reluctance preserves the size and number of hospitals and employment of consultants, nurses and managers - but is inefficient. As it was when Labour constrained NHS funding growth in 1976 and increased NHS funding carelessly in the last decade, so it is now with the Coalition’s austerity plans.
Attempts to improve? QIPP is ambitious, and still in its early stages. The Department offers no evidence of reductions in the clinical practice variations on which the programme is based. Instead it offers routine, limited (if interesting) data which offer few insights into whether QIPP is increasing productivity and / or shaving costs in ways whose effects on quality and quantity are difficult to detect.
Evaluation of QIPP needs improved use of better basic data about costs, activity and outcomes (note the paucity of primary care analysis), but sadly the Government has not prioritised this despite its rhetoric about evidence-based policymaking.
In particular, the Department of Health needs to measure the success of QIPP in the hospital sector by the extent to which it produces bed closures and staff redundancies whilst maintaining clinical quality. Quite a challenge!