Effective healthcare can be thought of as delivering the most beneficial treatment(s) at the minimum dosage required to produce a positive outcome that outweighs the associated side effects and costs. Unfortunately, healthcare is too often fraught with overdiagnosis, overtreatment, and exorbitant costs while producing less than desirable outcomes. Pain, in particular, is an overwhelming burden on both individuals and society at large, with an estimated economic cost of 560–635 billion dollars a year. This is in part due to an abundance of diagnostic approaches that fail to identify meaningful pathology, leading to numerous treatments that fall short of delivering a meaningful outcome. Many of the treatments designed to address people's pain have been well studied and found to lack a meaningful benefit, yet these interventions continue to be delivered, often by passionate purveyors eager to fill a desperate need.
If many of the treatments offered to patients fail to justify their continued use when adequately controlled and studied, why do these ineffective treatments persist in clinical practice? Many of the contributing issues are by no means unique to physical therapy; more broadly, they are inherent to human nature and reasoning. Nevertheless, I am a physical therapist, and as such these issues will be viewed through a physical therapy lens.
Of primary concern is the difficulty of teasing out which treatments work, why they work, and for whom they work. Clinicians can often be deceived as to which treatments actually work when they observe a positive outcome in the clinic. The complexity of the clinical environment precludes the accurate identification of a specific treatment's efficacy or effectiveness. Steve Hartman highlights several issues that confound our clinical observations, including the natural history of the disease, regression to the mean, placebo effects, post hoc reasoning, and confirmation bias. All of these factors can create the illusion that an otherwise ineffective treatment brought about a meaningful change. Because of these confounders, clinicians often conflate a positive outcome with treatment effectiveness. It is important to realize that outcomes measure outcomes, not treatment effectiveness; they are separate constructs. The totality of care might have produced a meaningful outcome, but it is too difficult to deduce how any single aspect of care did or did not contribute to the observed success.
Undoubtedly, it can be a powerful thing to watch a patient who has struggled significantly with pain or disability see meaningful improvement. Anecdotal successes in healthcare are abundant and provide the compelling narratives that people crave. There is no shortage of clinicians and patients touting the success of treatments ranging from the semi-plausible to the absurd. With much of medical care being advertised directly to consumers, these stories give patients compelling reasons to seek out specific treatments, regardless of their effectiveness or lack thereof. Most physical therapists are well-intentioned and invested in the success of their patients. This can produce a pressure to do something, especially when it is specifically requested, whether out of a desire to help, to foster patient satisfaction, or to maintain some form of therapeutic alliance. Oftentimes this is done without regard for the actual benefit and gets a pass because of the small risks and costs involved.
Physical therapy is a largely conservative-care-oriented profession with interventions of minimal cost and risk. Most of the time, this is a good thing. However, when it comes to minimizing or de-implementing low-value care, there are few negative incentives to dissuade the use of ineffective interventions. Contrast this with certain areas of medicine where failure to deliver effective interventions can come with severe morbidity and mortality, often at a high financial cost. This lack of significant consequence allows for the continued rationalization of ineffective treatments in physical therapy. It is much easier to justify the continued use of something like ultrasound under the guise of therapeutic alliance, dubious neurophysiological effects, managing expectations, or any other non-specific reason than it is to rationalize giving antibiotics to a child sick with a viral illness.
We cannot, however, think of ineffective conservative treatment in musculoskeletal medicine as harmless. Ineffective care does not exist in a vacuum and can lead to an escalation of care toward more costly and invasive interventions when a patient's primary complaint is not adequately resolved. There is also an opportunity cost in a system with finite resources: patients are increasingly limited in their access to physical therapy, often due to significant out-of-pocket costs or limitations in coverage. As the burden of musculoskeletal pain and disability continues to grow despite the proliferation of treatment options, ineffective treatment, however safe or low-cost, should be minimized.
As it stands, high-quality evidence is likely the best tool we have for teasing out whether a treatment works. Physical therapy as a scientific profession is relatively young, and as a result much of our evidence base, while improving, comes with profound limitations. Many of the issues highlighted in John Ioannidis' famous article "Why Most Published Research Findings Are False" are unfortunately prevalent in the physical therapy literature. The overall quality of evidence is low for many reasons, including inadequate funding to design and execute robust clinical trials, failure or inability to adequately blind clinicians, inappropriate statistical analyses, fatal flaws in trial design, treatment fidelity issues, lack of prospective registration, and lack of long-term follow-up. This was highlighted by Leo Costa, who found that as of December 2017 there were 30,000 physiotherapy RCTs indexed by PEDro, of which only 18% could be classified as high quality (PEDro score ≥ 7/10). Overall, this likely skews results toward false positives and inflated effect sizes, which in turn produces overconfidence in the tenuous results of low-quality trials. This makes it easy for any clinician to cherry-pick a handful of trials showing a positive effect, irrespective of their actual quality. As it stands, only a few conservative MSK interventions survive scientific scrutiny.
The issues facing evidence-based practice in physical therapy can easily mislead clinicians, who often do not have the time, skills, or resources to appropriately appraise research and translate its results into practice. Access to research is increasingly difficult, and productivity demands in healthcare continue to rise with minimal incentive to dedicate personal time to professional growth. Even clinicians who do take the time to sift through the literature face an overabundance of published research, often riddled with obfuscated language and spin, and may not have the requisite ability to decipher and interpret trial results. Even with these skills, clinicians may find that much of the data in trials is fatally flawed by trial design, confounders, or bias, among other things. As a result, the profession often demonstrates poor adherence to clinical guidelines and a limited understanding of specific aspects of research and evidence-based practice. These issues can generate significant mistrust and misunderstanding of evidence-based practice, along with sentiments that research may not apply to clinical practice, is unreliable, or is of limited use. Anecdotally, this is especially evident when published trials contradict observed clinical success.
This mistrust and misunderstanding of evidence-based practice can lead clinicians to seek more palatable answers in their pursuit of helping patients. These often come in the form of passionate gurus with significant financial and intellectual conflicts of interest. There is no shortage of charismatic educators touting anecdotes of patients who were largely failed by healthcare until they found that educator and received their particular treatment, which you can learn to deliver in a series of courses at a few hundred dollars a pop. These are exciting stories for clinicians, who often lack answers for the patients who need them the most. As clinicians continue to invest significant time and money into their education, there comes a pressure to get a return on that investment. This sunk cost makes it difficult to accept trial results suggesting the intervention they invested in might not actually be all that effective, and it significantly hampers de-implementation.
So what can be done? A lot, and certainly more than can be encapsulated here. For starters, the profession can do a better job of defining when and how new therapies should enter routine clinical practice. Scientific literacy should be improved in students and clinicians so they can become better consumers of the literature. Access to, synthesis of, and translation of research findings need to be restructured. Increased collaboration among clinicians, academics, and patients needs to be fostered. Professional communication needs to improve. Critical thinking skills should be developed to improve reasoning and clinical decision making. A culture of skepticism and doubt should be embraced when sensational claims are made about novel treatments. Scientific argument and debate need to be leveraged. Knowledge translation and behavior change need to be points of emphasis. When a 2007 survey showed that 83.6% of responding orthopaedic certified specialists were likely to use ultrasound in practice, there is clearly work to be done.