Abstract: Is evidence-based medicine the most appropriate paradigm for advancing clinical knowledge? There is increasing discussion of how evidence and science guide clinical medicine, and accumulating awareness that individualized medicine inevitably falls within a clinical gray zone. Here we argue that the basic proposition that an analysis of historical data from controlled trials can objectively and efficiently decipher which treatments are uniformly superior is fundamentally flawed. We also argue, in particular, that in a system as complex as acute medicine it is predictable that randomised controlled trials will frequently lack the fidelity to give definitive or even useful answers, especially around the margin of progress.
Author(s): Michael Keane, Chris Berg
Journal: Trends in Anaesthesia and Critical Care
Vol: 9 Year: 2016 Pages: 49–52
DOI: 10.1016/j.tacc.2016.07.002
Cite: Keane, Michael, and Chris Berg. “Evidence-based Medicine: A Predictably Flawed Paradigm.” Trends in Anaesthesia and Critical Care, vol. 9, 2016, pp. 49–52.
1. Introduction
Is evidence-based medicine (EBM) the most appropriate paradigm for advancing clinical knowledge? There is increasing discussion of how evidence and science guide clinical medicine [1] and accumulating awareness that individualized medicine inevitably falls within a clinical gray zone.
While there are legitimate arguments relating to the precise definition of EBM, by EBM we refer to the overarching paradigm which gives primacy to “the formal assessment of medical interventions using controlled trials” [2], in particular the randomized controlled trial (RCT). Here we argue that the basic proposition that an analysis of historical data from controlled trials can objectively and efficiently decipher which treatments are uniformly superior is fundamentally flawed. We also argue, in particular, that in a system as complex as acute medicine it is predictable that RCTs will frequently lack the fidelity to give definitive or even useful answers, especially around the margin of progress.
The goal of this Opinion article is not to provide a comprehensive review of the individual failings of EBM. Overall, we agree with the conclusion of a classic paper by Ioannidis which contends that most published research is unreliable anyway [3]. One overview found that 35% of published re-analyses of RCTs gave a result contrary to that originally published [4].
Rather, we observe that the authors of such critiques always propose the solution as being more and more rules in an attempt to achieve purer EBM [5]. The solutions always involve an underlying assumption that the scientific method of controlled experimentation using RCTs must be the most sophisticated way to determine allocation of interventions and treatments. This belief is enshrined in the levels of evidence that underlie the EBM paradigm [2]. A generation of clinicians have accepted that while EBM has problems it still represents the most rational and objective method of advancing medicine; it just needs to be practiced in the right way [6].
Yet EBM misunderstands the interactions between the complex systems of biology, clinical medicine, the human condition and lessons learned in other fields of human inquiry. Basic science is appropriately carried out using the scientific method of controlled experiments. But clinical medicine is not necessarily the same as basic science. Bridging the gap will involve a fundamental paradigm shift.
In this Opinion article we argue that the scientific method of controlled experiments has been the wrong paradigm to optimize information gathering in a system as dynamic and complex as acute clinical medicine. Clinical medicine should, more appropriately, be conceptualized as a marketplace of ideas – a market of inordinate combinations of competing ways of caring for patients from which clinicians must choose. Each of these inordinate combinations of interventions has a complex and difficult-to-predict array of interacting benefits and long-term and short-term side effects, occurring within an unfathomably complex, dynamically evolving system of biological interactions. A model that demands definitive “evidence” based on historical controlled scientific experiments would not be expected to have the fidelity to deal with individual patients, especially around the margin, nor the flexibility to effect progress in the most efficient way in such a complex and changing system.
The problem of system complexity and legibility is not unique to medicine. Section 2 of this Opinion article considers a parallel field – economics – where such a paradigm shift has been accomplished. Section 3 uses this parallel to explore what we call the “evidence base paradox” – that clinically relevant evidence uncovered by RCTs is contestable at the margin and therefore less objective than commonly understood. In Section 4 we propose an alternative paradigm and address its relevance for clinical practice.

Because what we are proposing as a response to the failings of EBM is so radically different, it is important to clarify the following points up front. To reject the EBM paradigm is not to suggest that practice should not be based on rational inquiry or objectivity. Quite the opposite: we propose that the system must incorporate far more information and data. “Market signals” should be constantly generated and considered, including all effects, side effects and remedies to those side effects. A more ‘market’-oriented approach would consider every conceivable piece of relevant information; practice would move forward, and rigorous analysis would be undertaken to see what works and what doesn’t in the context of continuous, open and intense competition, with individual clinicians, institutions and professional bodies all involved.
Nor are we rejecting science. The scientific method is important to get us to the starting point, as it relates to the basic science of drugs and different operative interventions. Science leads us to a point where we can observe that a drug has a basic pharmacological effect that can be measured in reproducible experiments. Clinical medicine, however, involves the next step. It requires practitioners to consider how best to put together the inordinate combinations of different investigations, medicines and interventions, to anticipate the negative consequences of that combination, and to adjust for and additionally treat those effects. The static model of historical evidence represented by RCTs would never be expected to be the optimum way to deal with such a complex system.
2. An analogy to economics
One way to think about the epistemological problems of EBM is to consider the analogous situation of an economic system. Throughout the twentieth century many academics championed the principle that central authorities were best placed to allocate resources. Central planning had at its heart the premise that rational experts, armed with the best computational power available, could understand the complex economic system and outperform the market.
Two critiques of the rational-planner model of economic allocation are relevant to the EBM discussion. Hayek long ago argued that planners face a fundamental knowledge problem [7]. The market system is a disaggregated communication network which harnesses local information and distributes it around the economy. Knowledge about efficient allocation depends on subtle on-the-ground factors, constantly changing economic, social and technological conditions, shifting individual preferences, and so on – which is exactly what we face in clinical medicine. Central planners are unable to acquire that knowledge, as it is both dispersed and contingent on the existence of an external process – a market – in order to arise [8].
A more recent contribution by Scott adds to our understanding of the effect of top-down approaches to knowledge in a complex system [9]. Scott describes the process whereby planners seek to make social phenomena “legible” by deliberately reducing the variation natural to the system. Planners require aggregated information in order to rationally reshape the society they command. This involves grouping distinct phenomena into uniform categories. Yet the process of seeking legibility comes at the expense of fidelity. Natural variation is often fundamental to the operation of the system itself.
Here we emphasise that an economic system is too complex to understand and centrally plan using backward-looking data. By contrast, market allocation is more flexible and adaptable – information is constantly gained and, most importantly, adaptations are constantly made to correct problems. Entrepreneurs make decisions based on their best reasoning from the information available to them – including information about historical trends and technical analyses. Inefficiencies (lack of effect, or side effects) are elucidated during the ongoing process of implementation and are most effectively corrected through the intense competition of alternative approaches.
No analogy is perfect and there are obvious differences between resource allocation in an economic system and medical practice. However, there are important parallels. In clinical medicine the research question being asked is always occurring in the context of a dynamic, complex, changing and individualized system. Information about the effect of interventions, especially in variable and changing combinations, will more efficiently be elucidated by open competition than by trying to isolate a variable through the experimental method. But such a free-floating, evolutionary vision of medical progress is strikingly counter to that which drives EBM.
3. The evidence base paradox
The “evidence” regarding the efficacy of a given treatment is not an objective, non-contingent phenomenon. It is instructive to appreciate what could be called the evidence base paradox [10]: the more something really needs an objective assessment of the evidence, the less it is possible to gain an objective answer free from ideological bias and other human frailties. If the margin of the clinical effect of an intervention is so wide, or the lack of effect so solid, that it can survive the most stringent debate about transferability to the constantly changing, dynamic, complex system of clinical medicine, then the intervention is obviously not in the gray area of medicine and an RCT is not needed. It is at the other end of the spectrum, where the intervention is around the margin of practice, that the problems with the RCT experimental paradigm become apparent. At this margin, the smallest methodological quirks or intricacies of a study will potentially be magnified to render the result irrelevant. It is impossible to know. The closer to the margin, the more the loss of fidelity from making the system legible will outweigh the real effect of the studied intervention. This is especially so as practice continually changes, practitioners negate the bad elements and enhance the good elements of an intervention, and background community medication and treatment practices change. We ought to concede that evidence-based medicine is still, especially around the margin of practice, no more than opinion-based medicine.
This is compounded by the fact that, in acute care medicine, only an approximate version of the scientific method is usually used anyway. Medicine is not about setting up an experiment and letting it run to a conclusion regardless of the unique circumstances of the patient. Practitioners have an ethical obligation to make sure all changeable elements are optimized according to their contemporary understanding of best treatment. Thus, the experimental milieu is constantly changed during the course of the experiment. Furthermore, if the rigor demanded of scientific experimentation were demanded of EBM, especially as it relates to true blinding, then it is an unavoidable axiom of EBM that it seeks to find a clinically significant difference for an intervention that is, at the same time and by definition, clinically indistinguishable to the clinician during the course of the “experiment” [11].
The evidence base paradox undermines the claim that RCTs represent a gold-standard method for determining the effect of an intervention. Making the clinical scenario “legible” enough to study necessarily reduces fidelity; the RCT’s lack of fidelity is akin to slicing cheese with an axe. All new medical knowledge emerges in the context of a constantly evolving dynamic system. The difference that a given studied intervention makes must be small and contentious (if it were obvious it would already have been taken up by clinicians). However, the artificiality of rigid protocols and nominal groupings of patient categories is, at the margin, too often going to outweigh the best possible difference that the intervention could make if all the optimum clinical “levers” were allowed to be used to compensate for predictable consequences of the intervention and to treat differently those who initially respond differently. The practice of deliberately hiding critical and actionable clinical information in a marginal intervention inevitably creates an X-factor regarding what the intervention could do, often significantly overwhelming the benefit of the scientific method in isolating a sole variable. Attempting to make the incomprehensibly complex system “legible” by using techniques such as continuous-variable transformation [12] or blunt statistical mechanisms such as intention-to-treat analyses inevitably renders any results highly contestable and debatable. In clinical medicine the ultimate determinant of which practices clinicians take up is the consequence of intense competition (and constant analysis) between different methods at the extreme margin of contemporary practice. For example, this is how survivability of sepsis has increased, independently of a formal EBM framework [13,14].
4. Clinical practice in a complex system
EBM’s utility for practice has been much overstated. An RCT presents a practitioner with historical probabilities about the effectiveness of a given practice. It is non-trivial to discern how relevant these findings are for an individual patient [15] in a particular (locally influenced) style of practice, or which findings have been made redundant by contemporary changes or by deliberate adaptations to the results of the study itself. The fidelity of a study will almost always be less than the disagreement about the meaning of the study in relation to individualized medicine – unless the difference is so obvious that a study was not needed in the first place.
Human ingenuity will always attempt to mitigate the negative components of the intervention that was studied. Practitioners will be judged on results and compared against the practices of all the competing clinicians and institutions around the world. If a given intervention really does or does not appreciably change the outcome of any of the inordinate number of indices that concern patients and physicians then open competition would not allow there to be an ongoing disequilibrium of practice. Once again, this will be the final arbiter of practice, not an RCT.
How can this dilemma be resolved without abandoning clinical medicine’s attachment to objectivity and science? Clinical medicine never is, nor should ever be, polite enough to stand still to facilitate the EBM paradigm. Like a flowing stream, clinical anaesthesia and intensive care medicine is always changing, with a virtually unlimited combination of drugs and interventions; at the same time surgical techniques are evolving and background community health and medication patterns are changing. Practice involves a continually updating information flow, harnessed by continued adjustment and then assessment of what is working and what is not. The challenge facing clinicians is managing that information flow. Hayek’s metaphor of the market as a dynamic information network offers a more fruitful frame through which to see clinical medicine than the static model of scientific experimentation.
The rise of adaptive clinical trials (ACTs) – in which knowledge gained during the course of the experiment is used to redirect or adapt the direction of the trial – is one attempt to resolve the lack of knowledge and fidelity we identify as inherent in conventional EBM. However, although more efficient than conventional RCTs, ACTs are still premised on the need to isolate a variable using the scientific method. A further innovation, the platform trial, is an extension of adaptive trial design [16]. Platform trials attempt to address the inordinate complexity of assessing the effect of combinations of multiple interventions on a disease.
Different clinical questions might be more or less amenable to investigation using RCTs. Descriptive, qualitative conceptual relationships, or laws of clinical complexity, could guide clinicians when assessing the ultimate usefulness of experimental data. For example, the more the manifestations of an intervention are apparent and actionable in real time, the less plausible it is that the study can be blinded.
This discussion is particularly relevant to the field of anaesthesia in regard to the various new sedative/induction agents on the horizon [17]. These are drugs which the anaesthesia provider gives and adjusts in real time. Going through the process of pseudo-blinding [11] clinicians to these drugs while they are experimenting with them in real time, for the sake of adherence to an RCT paradigm, would be predicted to reduce the fidelity of any investigation. Once basic science experiments have demonstrated certain safety and efficacy parameters, the usefulness of these sedative drugs will be decided by the cumulative experience of clinicians using them to their very optimum level. This will allow clinicians to decide which drugs are useful in which circumstances, based on the combinations of rare, common, short- and long-term effects and side effects. Conversely, elucidating long-term postoperative cognitive dysfunction from inhaled anaesthetics would potentially be more amenable to an RCT.
5. Conclusion
“The ability to reproduce experiments is at the heart of science” [18]. However, in a constantly and rapidly changing complex system, the loss of fidelity required to make the system legible enough for an intervention to be reproducible across all the different changes to practice that can occur becomes prohibitive. The scientific community is starting to acknowledge the problem of lack of reproducibility even in basic science [18]. Consider the significantly more difficult problem of having confidence in reproducibility in the rapidly changing environment of acute clinical medicine.
Therefore, it is vital that the medical profession engage in a deeper, more conceptual and philosophical debate about the role of EBM and what the scientific method can and cannot contribute to the advancement of clinical knowledge. After a generation of EBM, Ioannidis has shown how little we have actually learned. This Opinion article has not examined individual studies; rather, it has focused on elucidating an alternative conceptual frame. However, we are sure readers can think of influential trials that have required practitioners to suspend disbelief in order to squeeze them into the EBM paradigm – large multi-centre trials that were meant to definitively resolve ambiguity from smaller trials have themselves produced data and findings that are highly debatable because of highly controversial dosing; lack of regard for individual and contemporaneous patient physiology; lack of plausible blinding when an intervention’s putative mechanism is something the clinician directly monitors in real time; lack of acknowledgement of background community changes in medication use; and lack of compensation for side effects known to be associated with the intervention.
It is important to acknowledge what our marketplace-of-ideas paradigm (based on open competition to achieve ever better results) does not solve. Our alternative paradigm would not eliminate the possibility of misconceptions about appropriate treatment. The philosophy behind EBM is the search for objective knowledge about interventions and their effects; it seeks to remove the individual biases and prejudices held by clinicians and scientists about the efficacy of a given treatment. We have argued that, at a clinically relevant margin, these biases and prejudices are still determinative of practice within the EBM paradigm. Indeed, EBM obscures rather than acknowledges human flaws in instrumental reasoning, providing practitioners with a false certainty about treatments and effects. Our alternative paradigm of open competition, with clinicians trying to achieve better results, does not risk a reversion to pre-scientific medical practice. Far from rejecting scientific investigation, our paradigm of market signaling in the context of open competition offers practitioners a way to use scientific knowledge that preserves fidelity, in contrast with the reductionist approach of EBM.
It is widely acknowledged that EBM is flawed. The answer is not to double down on EBM, but to change, in many circumstances, to another paradigm. Persisting with the EBM paradigm could hold back the information gathering and dynamic learning that otherwise results from the more rapid “market” signaling of what works. Practitioners must not do to patients with EBM what scientific planning did to the economies of the old Soviet bloc.
References
[1] J.J. Marini, J.L. Vincent, D. Annane, Critical care evidence – new directions, JAMA 313 (9) (2015) 893–894.
[2] J. Belsey, What is evidence-based medicine? The “What is…?” series, http://www.medicine.ox.ac.uk/bandolier/painres/download/whatis/ebm.pdf (accessed 25 June 2016).
[3] J.P.A. Ioannidis, Why most published research findings are false, PLoS Med. 2 (8) (2005) e124.
[4] S. Ebrahim, Z.N. Sohani, L. Montoya, A. Agarwal, K. Thorlund, E.J. Mills, J.P.A. Ioannidis, Reanalyses of randomized clinical trial data, JAMA 312 (10) (2014) 1024–1032.
[5] J.P.A. Ioannidis, M.J. Khoury, Assessing value in biomedical research: the PQRST of appraisal and reward, JAMA 312 (5) (2014) 483–484, http://dx.doi.org/10.1001/jama.2014.6932.
[6] T. Greenhalgh, J. Howick, N. Maskrey, Evidence based medicine: a movement in crisis? BMJ 348 (2014) g3725.
[7] F.A. Hayek, The use of knowledge in society, Am. Econ. Rev. 35 (4) (1945) 519–530.
[8] J.M. Buchanan, Readers’ forum: comments on ‘The Tradition of Spontaneous Order’ by Norman Barry, Lit. Lib. 10 (4) (1982) 5–18.
[9] J.C. Scott, Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed, Yale University Press, New Haven, 1998.
[10] M.J. Keane, Copayments and the evidence base paradox, Med. J. Aust. 202 (2) (2015) 68–69.
[11] K.F. Schulz, D.A. Grimes, Blinding in randomised trials: hiding who got what, Lancet 359 (2002) 696–700.
[12] O.O. Nafiu, B.W. Gillespie, A. Tsodikov, Continuous variable transformation in anesthesia: useful clinical shorthand, but a threat to research, Anesthesiology 123 (3) (2015) 504–506.
[13] The ARISE Investigators and the ANZICS Clinical Trials Group, Goal-directed resuscitation for patients with early septic shock, N. Engl. J. Med. 371 (2014) 1496–1506.
[14] H.J. Priebe, et al., Letters to the editor: goal-directed resuscitation in septic shock, N. Engl. J. Med. 372 (2015) 189–191.
[15] A. Chandra, D. Khullar, T.H. Lee, Addressing the challenge of gray-zone medicine, N. Engl. J. Med. 372 (2015) 203–205.
[16] S.M. Berry, J.T. Connor, R.J. Lewis, The platform trial: an efficient strategy for evaluating multiple treatments, JAMA 313 (16) (2015) 1619–1620.
[17] J.W. Sear, T.D. Egan, David (propofol wannabes) versus Goliath (propofol): AZD-3043 goes up against the giant, Anesth. Analg. 121 (4) (2015) 849–851.
[18] Nature Editorial, Reality check on reproducibility, Nature 533 (2016) 437, http://dx.doi.org/10.1038/533437a.