I was reading a journal article recently, covering a topic for which there is precious little information available. The topic, for purposes here, doesn’t matter. The point is that at the end of the article, when the author included the obligatory self-criticism, he lamented, “the primary weakness of our study is that it is not evidence-based.” I wanted to throw up. The article had done a masterful job of filling an information void, yet political correctness, wielding the buzz phrase “evidence-based medicine,” has become so powerful that it turns bright-shining humans into dim-witted lackeys scrambling for acceptance.
What the author should have said is this: “the primary weakness of our study is that it was not a prospective, randomized controlled trial.” Clearly stated. Specifically stated. More accurately stated. Yet, not quite sufficient if one is looking for politically correct self-flagellation.
How did it come to this? What is evidence-based medicine? Haven’t we been doing this all along?
The buzz phrase emerged about 1990, but did not gain a foothold until a decade later, when it was linked to the sister concept of “guideline-based” medicine, sometimes referred to disparagingly as “cookbook medicine.” If you’ve ever wondered why your doctor spends more time in the exam room examining his or her computer than examining you, it’s because they are under pressure to document layers of trivia that allow them to 1) bill more effectively, 2) meet oppressive accreditation guidelines, and 3) weather the storm of criticism if someone (peer, attorney, patient) should question whether or not proper guidelines were followed.
How did this sociologic transition begin? Here are some reasons: Rising costs and limited resources. The decision of physician-led groups to police themselves rather than wait for the government to step in (of course, it’s always “other specialties” that are causing the problems). Accreditation organizations ballooning like any bureaucracy, generating more and more requirements to follow guidelines and, more importantly, to document adherence, whether it happened or not. The computer revolution that allowed full access to the published literature, creating awareness of relative ignorance. Scientific developments occurring so rapidly that no human can keep up, again deferring to computer science. The rising status of epidemiologists (medicine without the blood) and public health specialists, who anointed themselves as the only neutral parties worthy of establishing guidelines (part of their ascension to the throne included studies demonstrating the inability of many physicians to take conclusions from randomized controlled trials and counsel an individual patient correctly). As usual, the causes are multi-factorial.
What is the definition of “evidence-based medicine” anyway? You may have heard the claim that “Medicine is both an art and a science.” When I heard this phrase growing up in the home of a “general practitioner,” I was told that “art” pertained to bedside manner. Today, the “art” means something different, in my view. Bedside manner is in its own class, better described as “ethics” or “humanity” or plain ol’ kindness and empathy. Instead, the “art” of medicine is filling in the blanks left by pure science, using logic and wisdom derived from available facts. Alternatively stated, the “art” is using reason to fill the gaps left by empiricism. It is impossible to settle every issue in medicine with a prospective, randomized trial. Therefore, there will always be blanks that need to be filled in, sometimes using that nebulous tool “judgment.”
Francis Bacon (1561-1626) is sometimes referred to as the “father of empiricism” (and/or “father of scientific method”) based on his philosophical stance that inductive reasoning should guide science, not the old-fashioned syllogism or rational deductive reasoning. But even Bacon cringed at schools of thought based on pure empiricism, claiming that this approach “gives birth to dogmas more deformed and monstrous than the Sophistical or Rational School.” His famous parable of the spider (pure rationalism, spewing forth silk from within to weave a web), the ant (pure empiricism, collecting grains of dirt, but nothing from within), and the ideal of the honey bee (a blend of both empiricism and rationalism, collecting pollen and offering honey in return) reveals that regardless of his devotion to inductive reasoning and empiricism, one should be well-grounded in reason, i.e., rational thought. It is the blend of empiricism and rationalism that generates honey.
Evidence-based medicine uses a process that is admirable, organizing what was already known about high-quality vs. low-quality evidence into systematic rankings, applied both to individual publications and to guidelines. While some claim the top of the pecking order is the prospective, randomized trial, there’s actually a qualifier that generates even higher quality evidence – double-blinding – that is, both the patient and the researcher are blinded to the intervention, be it pill or placebo. It should be evident that some trials cannot be double-blinded, and I’m referring to the area where I practice – up-front medicine heavily focused on radiology and surgery. It’s difficult to ask a surgeon to perform a procedure blindfolded, or a radiologist to interpret an MRI without looking at the picture (though radiologists can be blinded to the final pathology).
Dropping down a notch on the evidence scale opens up all sorts of potential bias, too numerous to describe here. But even a well-designed clinical trial has one overriding problem – it may not translate to the real world. The paradox here is that the greater the number of exclusion criteria one uses to control for variables in the clinical trial (raising the quality of the data), the more restricted the population to whom the results apply. While guidelines are sometimes careful to note these limitations, this does not always translate to actual practice.
Still, evidence-based medicine is inherently a worthy goal. The problem is going overboard. Academic departments in Evidence-based Medicine have emerged (maybe this is just a re-christening of Epidemiology), and organizations sprout more and more guidelines, which are much more than suggestions – they are very strong recommendations that put a physician on the defensive if not followed. Even though these guidelines may be physician-generated, insurers don’t necessarily follow suit, instead generating their own sets of guidelines, differing from one insurer to another.
Rigid devotion to empiricism has many untoward side effects, including the development of guidelines that are logically inconsistent. For instance, guidelines for SERM risk reduction (pharmacologic risk reduction) in high-risk women are based on the inclusion/exclusion criteria of the clinical trials that proved effectiveness. Fine. Then, the use of high-risk screening with breast MRI is based on the criteria used in different clinical trials. Fine. But now the bottom line: the inclusion/exclusion criteria were markedly different for these two available interventions. As a result, women who qualify as high-risk for SERM risk reduction may not qualify for MRI, and vice versa. The illogical result? “Here, take this pill every day for 5 years to lower your risk of breast cancer, and here’s the host of side effects you need to know about, including uterine cancer or even death due to pulmonary embolus. And by the way, you’re not at high enough risk to warrant recommending a breast MRI.” Really?
In 2013, one of the nation’s pre-eminent breast oncologists, Harold Burstein, MD (Dana-Farber/Harvard), wrote an engaging editorial in The Breast about his experience at the St. Gallen (Switzerland) breast cancer conference, entitled: “Expert opinion vs. guideline based care: The St. Gallen Case Study.” He wrote, “In contrast to the current American craze for detailed guidelines and pathways, the St. Gallen meeting unabashedly seeks to find expert consensus. There are no checklists of tests. No defined pathways. No lists of preferred regimens. No arrows pointing one of three ways based on a decision node….The tenor is to provide a direction for care that covers most of the patients rather than to script the design of care to be given to all patients with few exceptions…”
Dr. Burstein realized that this approach can be viewed in a negative light, “The looseness of the St. Gallen (conference) process alternately charms and appalls many observers from the U.S. This is particularly the case for those who look to St. Gallen to define standards of care that are transmittable to third-party payors, hospital administrators, and programmers who write electronic health record templates.”
Perhaps Dr. Burstein was in the “charmed” group by virtue of his master’s degree in the history of science, where one is exposed to the many philosophical theories as to what constitutes “the scientific method,” along with the fact that many major scientific discoveries used no methodology at all, other than rational thought. He closes his editorial with, “The current enthusiasm for guidelines and pathways has innumerable merits. But one necessary weakness is the assumption that clinical expertise can be fully bottled, packaged and shipped around the world. For those who cherish learning from wise colleagues and exploring the endless variations of clinical care, it is a delight that meetings such as St. Gallen continue to flourish.”
My thoughts on the topic are identical, but my spin a little more critical. Whereas these guidelines serve well to bring everyone up to a minimum standard, they do not encourage excellence above and beyond guidelines. Quite the contrary, the absence of a guideline can squash excellence. Witness the fact that it took Mel Silverstein, MD, arguably the most knowledgeable doctor in the U.S. on DCIS (Stage 0 breast cancer), 12 years to get his recommendation for “wide excision alone” into the NCCN guidelines. Why? His reasoning was superb, not to mention the cost-effectiveness of his approach, but his data was considered “low quality,” i.e., from non-randomized observational studies, even though he followed a strict protocol. As a breast surgeon, I adopted his system as supremely logical the first time I heard it. Those of us who accepted the Van Nuys protocol had to endure criticism from peers (for not irradiating everyone with DCIS) while Dr. Silverstein fought for recognition of his approach. Even after he managed to get his guidelines into print, there’s a notation that this is a “2b” recommendation, based on low level evidence — an asterisk, much like Roger Maris.
My beef is not with the concept of evidence-based medicine and its associated guidelines, in principle. My beef is with the by-product of obsessive preoccupation that seems to go hand-in-hand with them. I can offer many examples in my area of expertise (especially breast MRI) where excellence is squashed, and ignorance perpetuated, through slavish devotion to illogical guidelines.
Another twisted by-product of “neutral evidence-based medicine” is the fact that guidelines are no longer considered reliable when written by experts in an area who are also providers of health care in that same area. Now there’s a new concept. How do you find an expert to help establish guidelines who does not practice in that particular area? Answer: you don’t. You use experts in numbers and statistics, not experts who actually use the proposed guidelines.
Understanding that one purpose of evidence-based medicine is to eradicate human bias (an impossible task), I can agree to go as far as restricting experts to non-voting status on guideline committees, or at an absolute minimum, allowing experts to testify at guideline meetings in order to put things into perspective through revealing nuance lost in raw statistics. Instead, some “think tanks” totally exclude practicing experts from the process. A good example is the U.S. Preventive Services Task Force on breast cancer screening, where they not only refused to consider any observational studies of screening mammography, but also refused to hear testimony from radiologic experts on screening mammography, much less have one serve as a non-voting member of the committee.
My beef is with the fact that while, in principle, evidence-based medicine is a worthy goal to provide a stronger basis for science in medicine, in practice it is evolving toward a more extended goal: science to the exclusion of art. It is “high-quality data,” which may or may not correlate with Reality, to the exclusion of logic and wisdom. Ultimately, it will serve to control medical practice by those who don’t do it.