Smith and Pell’s “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials” has been shared enough times as ammunition against evidence-based practice that I feel compelled to write a blog post about it. For those unfamiliar, Smith and Pell wrote a satirical systematic review proposing a randomized controlled trial in which one group jumps from a plane with parachutes and the other uses a placebo parachute. You can guess what the results might be. The issue with the article is that, while comical, it is not actually an effective criticism of evidence-based practice and its preference for randomized controlled trials over other forms of evidence (such as observational studies).
Smith and Pell’s criticism falls short because, for interventions like parachutes, the effect is so blatantly obvious that there is no need to conduct a randomized controlled trial. You end up with a truly fantastic intervention without any robust trial evidence to support it. Jeremy Howick calls this the “paradox of effectiveness”: some of the most effective interventions available do not actually have controlled clinical trials illustrating their benefits.
Howick provides other examples of the “paradox of effectiveness,” such as emergency appendectomies, the Heimlich maneuver, defibrillator use and anesthesia. These are interventions that are highly plausible, fill an acute and usually life-threatening need, and produce dramatic and routinely observable effects. The effects of these interventions are so large that they effectively rule out any potential confounding variables. You do not survive jumping from a plane with a parachute because you expect it to work, appendectomies do not prevent complications from a ruptured appendix because of selection bias, anesthesia does not produce unconsciousness due to issues with blinding, and the Heimlich maneuver does not resolve choking due to regression to the mean. Therefore, conducting a double-blind randomized controlled trial to demonstrate the worth of parachutes would be a waste of resources. Nor would doing so be necessary to adhere to the philosophy of evidence-based practice. Where a controlled trial would be helpful is in comparing different parachute types, open vs. laparoscopic appendectomies, different defibrillator settings and pad placements, or different drugs used in anesthesia. These differences would be far more subtle, and our confidence that any observed difference is not caused by confounding variables correspondingly smaller.
Unfortunately, interventions that produce dramatic, observable and repeatable effects are few and far between. Most interventions produce smaller and more inconsistent outcomes, which necessitates studying them under more controlled conditions to rule out competing hypotheses. These modest effects are far more characteristic of the interventions available to physical therapists. Therefore, using the Smith and Pell article to argue that interventions in physical therapy do not need to be supported by well-controlled evidence is a flimsy argument. To be clear: I am not saying that all interventions need controlled trials to justify their use. I am saying that to be confident in the efficacy and effectiveness of physical therapy interventions, you need solid data. For example, the effect of manual therapy in patients with neck pain tends to be small, nowhere near the effect of parachute use in preventing death among skydivers. Therefore, in the absence of evidence from robust controlled trials, one cannot confidently say that manual therapy for neck pain:
- has a meaningful, specific, repeatable effect
- works via a specific mechanism (e.g. resolving a “stuck” facet joint)
- produces outcomes that are not actually the result of factors separate from manual therapy, such as natural history, regression to the mean, expectation, beliefs, reward, etc.
Randomized controlled trials, for all their limitations, are still the best tool available to tease out the effects of the large majority of interventions. Notable exceptions do exist, where the effects are so large that randomized controlled trials are unnecessary. These exceptions create a paradox of effectiveness; however, they are not a reason for overconfidence in interventions that lack well-controlled supportive data. Nor is the Smith and Pell satire an effective criticism of evidence-based practice, or a reasonable way to justify physical therapy interventions that have not been scientifically vetted.
Photo credit to flickr user m_ragazzon