
Brett Taylor is an associate professor of pediatrics and emergency medicine, and holds a master's degree in health informatics. He works as a researcher, lecturer and emergency pediatrician through Dalhousie University and the IWK Health Centre in Halifax.


There’s an old saying in emergency medicine: "Don’t just DO something, STAND there."  In other words, sometimes leaping into the fray without careful thought can result in wasted effort, or even make matters worse. Think first, then do. 

Recently at a meeting in Halifax, the deputy minister of health for Nova Scotia was asked what emergency doctors could do to help sustain the health-care system. There was a slight pause, then a remarkably pragmatic response. Don’t tell us what we should be doing, said the deputy minister. Rather, tell us what not to do.

Physicians, in other words, could best serve public policy by defining waste within the system: those procedures, tests or therapies that, though commonly applied, do not appear to alter patient outcomes.

It was, I think, a heartfelt request by an individual whose position requires him to ration health care. 

'We can no longer do more with less'

We often unfairly demonize politicians who are, in reality, simply responding to mutually incompatible demands from their electorate: lower taxes and better services. Health care in the 21st century has become a zero-sum game. We can no longer "do more with less"; if we choose to fund this new medication or that new surgery, we are really asking that our system stop funding something else.

Want to provide better early childhood development, more mental health resources, more nursing home beds? No problem: just let us know which surgical procedure, which cancer chemotherapy, which out-of-province travel benefit you want us to cut so that we can pay for it.

Smarter funding

People don’t do well with the concept of limits; look at what we are doing to our planet, the imminent shortages of resources, and concepts such as peak oil. Real-dollar funding faces similar limits; the inconvenient truth for health care is that we passed "peak dollars" quite some time ago. The task facing our society is therefore to ration funding to those interventions that make the most difference.

The enabler of that decision making process is solid information about what works, and what doesn’t.

Here’s an example. In the 1960s, adults who developed low back pain were essentially undiagnosable. Plain X-rays don’t reveal much about this condition, and physical examination usually turns up little. Physicians of that era, anxious not to cause harm, usually recommended an extended period of bed rest, which was often quite effective.

Beginning in the 1980s, however, with CT and subsequently MRI scanners, physicians could see all sorts of abnormalities in and around the spines of individuals with back pain, some of which seemed surgically correctable. Surgery rates for back pain began to climb; technology melded with medicine to address what was in some cases an incapacitating illness.

The problem, however, is that three decades later there is increasing evidence that surgery for lower back pain generally doesn’t work. A publication in International Orthopaedics in 2008, for example, shows that although surgery carries with it a risk of complications, it appears to offer no benefit for patients with back pain. (This is a general finding for large populations … patients with significant pain should still see their physician to get individualized advice.)

The CT scan and MRI may have found abnormalities, and those might be fixed by surgery, but were they the principal cause of the back pain, or just incidental findings? Does the surgical intervention itself contribute to the pain in some way? None of this is clear. What is obvious is that routine use of CT or MRI and automatic referral to an orthopedic surgeon is not necessarily money well spent.

Another example. Bronchiolitis is a wheezy illness of infants and toddlers, and a very common reason to visit the emergency department. In the 1980s, if we wanted to know how bronchiolitis affected the amount of oxygen in a baby’s bloodstream, we had to do a painful, sometimes technically difficult procedure called an arterial blood gas, in which a needle is placed into an artery at the wrist or groin and a sample of blood withdrawn. As a result, we usually assessed children clinically, sending home the great majority who were reasonably happy, and whose feeding and sleep patterns weren’t too disrupted.

By the mid-1980s, however, transcutaneous oxygen saturation devices were becoming common.  These wonderful little devices show how saturated the arterial blood is with oxygen by shining a light through a finger … in real time, with no needle, no pain. Low oxygen saturation, obviously, meant higher risk. The application to bronchiolitis was obvious.

So we were all a little surprised to find that many of the kids we had been sending home, reasonably happy, feeding and sleeping well enough, had oxygen saturations that, according to the logic of the day, implied an unstable situation. These children "needed" oxygen and observation, since they were so clearly "at risk."

The result was a boom in admissions and longer emergency department stays for bronchiolitis. No one wanted to risk the health of a child; the logic behind the oxygen saturation curve seemed unassailable. A new paradigm was born in which clinical evaluation, though still important, was trumped by low oxygen saturation measurements.

Recently, however, studies have indicated that the majority of children with bronchiolitis have low oxygen saturations at some point during their illness, and that these mild desaturations don’t seem to signal harm to the child. In fact, when measured, the risk to children with mild or moderate drops in oxygen saturation appears no greater than the risk to those with higher saturations. The tide is turning, and clinical assessment is again recognized as the most valuable parameter in deciding care.

'We are still spending a lot of your money filling inpatient beds with babies who probably didn't need to be in hospital'

Yet it hasn’t completely turned. We are still spending a lot of your money filling inpatient beds with babies who probably didn’t need to be in hospital, and giving wheezy children extraordinarily long emergency visits while we debate the relative value of clinical assessment versus minimally low oxygen saturations. "Standing there" is hard when your training, and to some extent parental expectation, ask you to "do something".

The same sort of argument can be made about mammograms for women between 40 and 50 years of age (currently recommended in Nova Scotia despite a paucity of evidence that screening this group saves lives), some joint replacement surgery, antibiotics for the vast majority of ear infections, and many other examples. Some of these may have evidence of benefit in carefully selected individuals but haven’t proved useful for the general population, and can probably be considered overused. In each case, the money we spend on these interventions that probably do little, and maybe even cause harm, is cash we don’t have to spend on other, perhaps more fruitful endeavours.

Estimating potential savings no easy task

How much money we could save our system by simply applying evidence-informed practice is hard to say; true health care costs are incredibly difficult to calculate. One paper from Australia reported savings for rheumatoid arthritis care of $7,000 (Australian dollars) per year of disability using evidence-based practice, compared with a total cost of $19,000 per year using routine care; saving $7,000 on a $19,000 baseline works out to roughly a 37 per cent reduction in costs. I doubt that all such interventions will offer such a high return on investment, but the potential is obvious.

Don’t get me wrong; I am not trying to criticize medicine with these examples. The most important characteristic that differentiates medicine from alternative health care is its willingness to look for, find, and correct errors in its own management. All care provided in medicine is seen by physicians as a theory; research in medicine strives to disprove these theories and replace them with better ones. That’s how the remarkable improvements in health care evident over the last century have come about.

Each of the examples of unnecessary spending above was mediated by a patient whose only intent was to seek the health care he was due, and a practitioner who, sitting across from the person she was advocating for, made the best decisions she could with the information available. I don’t have the space here to give a fair, balanced account of all the times that interaction has produced positive, even miraculous care.

The point I am making, though, is that it pays, particularly in our current funding environment, to spend the time and effort to test new interventions before deciding whether to implement and fund them. It’s even more important to challenge standard practice, to see if the care we routinely offer actually accomplishes what we want. Clearly, what seems right in medicine is not always correct; following logic or history or gut feeling is sometimes necessary in the middle of the night, in a one-on-one patient care decision, but it is a lousy way to run the larger health-care system. We shouldn’t be listening to arguments; we should be developing and inspecting the evidence.

'Unfettered research is always interesting and useful; targeted research can be seen as a compelling social investment.'

This is sometimes a difficult thing to explain to politicians because, frankly, it costs money to support academics to do that research. Try explaining the need to hire another egghead to a deputy health minister whose daily workload includes deciding not to fund certain chemotherapy agents because of their cost. Not an easy sell.

The deputy minister, it is quite clear, isn’t asking me. But if he were, I would recommend funding physicians in ways that encourage the practice of evidence-based care. In the case of tertiary-care hospitals and universities, I would tie academic funding to intervention outcome research, demanding that public funds be spent to define best practices and to decrease overall health-care costs. This makes economic sense; research like the Australian study above would easily pay for itself if its findings were applied to only a handful of patients. Unfettered research is always interesting and useful; targeted research can be seen as a compelling social investment.

Academic centres should also be challenged to implement the results of good research. Very often, front-line health-care workers just don’t have the opportunity or time to keep up with the literature. Academic centres should provide "Knowledge Translation" officers, whose primary task is to audit care in their area, broadcast information about best practices and help change clinical practice for the better. In my field, for example, this might mean exporting evidence-supported patient care plans to the entire province from a single centre resourced specifically for that purpose. This is an academic role that is separate from, but linked to, that of researcher: an interface between front-line clinicians and the new knowledge that might help them.

In any case, the deputy minister’s question is a valid one. Over the next couple of decades, our health-care system will either find its way to efficient, sustainable care, or it will founder financially. The Canadian ideal of needs-based services at a high level of competency is frankly at risk. Continuing to fund care-as-we-know-it is politically easy, but seems doomed to fail in the long term. Discovering "what not to do" may be our most important health-care objective.