Physiological

Where Physiotherapy Gets Logical


Tearing Down the Pillars of Evidence Based Practice

August 11, 2015 Kenny Venere

One of the most common arguments stemming from our recent post on acupuncture, Needle in the Hay, was that we as a profession should not overly rely on “1/3rd of the pillars of evidence based practice.” This argument reflects nothing more than a fundamental misunderstanding of what evidence based practice is. The concepts of evidence, clinical experience/expertise, and patient values do not exist independently of one another, nor can we pick and choose which of the three best suits a particular set of beliefs. In fact, we should stop referring to evidence based practice in terms of “pillars” altogether. It is an unhelpful metaphor that only serves to perpetuate a misinterpretation of the philosophy of evidence based practice.

To be clear: evidence based practice is not a math equation, it is not a three-legged stool, and it does not consist of pillars. You are not “2/3rds evidence based” when your experience says a treatment works and your patient values that same treatment. Nor are you overly focused on evidence when systematic reviews of well-conducted randomized controlled trials directly refute an individual’s clinical experience or their patient’s history with a particular intervention.

Evidence based practice is (from Sackett):

the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient. It means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

Or even better (from Howick):

Evidence Based [Practice] requires clinical expertise for producing and interpreting evidence, performing clinical skills, and integrating the best research evidence with patient values and circumstances. 

Why was it not called Experience Based Practice? Because clinical experience is woefully unfit to determine whether your practice is actually beneficial, harmful, or ineffective. Howick writes, “until at least 1860, and probably 1940, most medical interventions were no better than placebo or positively harmful.” Those eras were defined by experts, gurus, and charlatans basing practice solely on their experience, observation, and beliefs. I imagine the physicians treating George Washington had previously seen success with bloodletting in other patients, but it was this misleading clinical observation of benefit that led to four liters of blood being drained from the president, which likely contributed to his death shortly thereafter.

How do things like this happen? How can a treatment we often see a “benefit” from actually be ineffective or harmful? Because clinical experience is muddled with confounding variables, and humans are easily deceived by phenomena such as placebo effects, natural history, regression to the mean, and other biases.

As I have written previously, clinical experience and expertise are excellent hypothesis generators (among other things). This means that the observations of an experienced clinician can produce theories and novel interventions that can later be tested under more controlled conditions. This is an incredibly valuable benefit of clinical experience. However, it is only under controlled conditions that the true effect (or lack thereof) of these novel treatments can be determined with any degree of certainty.

And what of patient choice, values, and circumstances? Well, every patient is unique. This is why you use your clinical expertise to integrate the best available evidence to optimally meet your patient’s needs at a particular time. Again, it is the best available evidence that you base your practice on. Sometimes all that might be available is a lowly case study, and other times your practice may be based on numerous high quality systematic reviews and meta-analyses. Most of the time it’s somewhere in the middle. What matters is that you are using the best available evidence, not the evidence that best suits your biases and beliefs.

However, we should not misuse patient choice as justification for a treatment that evidence suggests is rubbish. It is important to consider how the patient came to choose this particular treatment in the first place and what else might have contributed to the patient experiencing “success” with this treatment. On this particular issue, Neil O’Connell says it best:

Why do patients choose acupuncture or manipulation or ultrasound etc? I guess they do because each treatment has passionate advocates who promote them, advertise them, spread the word about them, often with a willing media tagging along like an enthusiastic labrador. So the treatments help because patients expect them to, but patients only expect them to because the culture that delivers those treatments has propagated that belief! It’s a fabulous business model (it all costs) but I smell a conflict of interest. Patient choice is difficult in a world where good information is so elusive.

As it stands, evidence from high quality trials is unambiguously the best way to know if a treatment has any sort of efficacy or effectiveness. This is not up for debate. Yes, there are problems with evidence (see recent pieces by Greenhalgh, Goldacre, and Ioannidis), and these must be recognized and corrected. However, there are far greater issues with basing practice on experience and patient values. That is not to say we should ignore these things, but rather that we should put them in their rightful place within the philosophy of evidence based practice. This means using our clinical expertise and experience to interpret and integrate the best available evidence on an individual, patient-to-patient basis. The idea of evidence based practice as consisting of three separable pillars only serves to muddy the waters and should be discarded.

Photo courtesy of flickr user giftedstudieswku
