Evidence vs. judgment: New study examines how doctors make decisions

Benjamin Djulbegovic, M.D., Ph.D.
Medical decisions almost always involve choices under uncertainty, which is why, when you consult with doctors, they follow established clinical guidelines to decide on the right tests and treatments. Guidelines are one of the most important tools a physician has — but how good are the judgments that determine them?
 
That’s the question raised in a July article in the Journal of the American Medical Association (JAMA) — and it trended so strongly in the medical community that it quickly landed in the top 5% of all research articles ever tracked by the monitoring service Altmetric, and in the top 1% of articles published at the same time.
 
The paper was authored by City of Hope’s Benjamin Djulbegovic, M.D., Ph.D., a hematologist-oncologist, professor and director of research in the Department of Supportive Care Medicine, and medical director of evidence-based analytics. His co-author was Gordon Guyatt, M.D., M.S., of McMaster University in Canada.
 
The basis for current guidelines falls into two categories, evidence-based and consensus-based — a distinction the authors call “misguided and misleading.”
 
It’s a strong statement, and we spoke with Djulbegovic to learn more about those categories, where the problem with them lies, and what needs to be corrected so that doctors and patients can make better decisions.

EVIDENCE-BASED AND CONSENSUS-BASED GUIDELINES

The term evidence-based guidelines refers to guidelines developed from evidence that is considered strong, such as randomized clinical trials, in which participants are divided into two groups: one receives the treatment being tested and a control group receives something else.
 
“But there’s a misconception at play here,” said Djulbegovic. “While evidence-based guidelines do actually pay attention to supporting evidence, there’s a failure to understand that they also require assessing the quality of that evidence before formulating judgments.”
 
Consensus-based guidelines are typically developed when the evidence is considered weaker, such as evidence from nonrandomized trials, in which participants are not randomly assigned to treatment and control groups. In such cases, professional guideline panels often rely on informal consensus — and it’s not always consistent with the available evidence.
 
“To reach a consensus, the smartest guys in the room get together and start talking about how they practice,” said Djulbegovic. “But even if you’re the best expert in the world — and many people believe they are — every single expert is biased, just like any other human being.”
 
And bias is always at play — even when guidelines are evidence-based.
 
“We all observe the world within a preconceived framework,” Djulbegovic said, “so it’s a misconception that evidence alone can generate recommendations without underlying judgments. Whether the evidence is weak or strong, interpretation still comes into play.”
 
Take, for example, the question of mammography.
 
“You actually have the same research evidence interpreted by people in many different countries in different ways,” he said. “In Switzerland they are proposing to eliminate mammography completely because, they say, the benefits are small and the potential harm high. But in the U.S., the American Cancer Society recommends mammography every year for certain age groups.”
 
The JAMA article asserts that all guidelines require both evidence and consensus, and that all evidence — weak or strong — must be subjected to rigorous systematic review so that the judgments behind guidelines are consistent with the quality of that evidence.

THE VALUE OF NONRANDOMIZED TRIALS

Randomized trials are the gold standard for reliable results, but they can be too expensive and difficult to carry out. What’s more, there are times when depriving a subject of the test drug might have dire consequences. That’s why nonrandomized trials are sometimes necessary.
 
In fact, in 2012 the Food and Drug Administration launched the Breakthrough Therapy designation, which allows the approval of new drugs based on nonrandomized trials — if the effects are dramatically large.
 
But exactly how large is that?
 
“This is a critical question,” said Djulbegovic. “Experts theorize that it should be five to 10 times larger than data from other nonrandomized trials — but only between 1 and 2% of the drugs approved by the FDA meet this theoretical criterion of dramatic effects.”
 
Djulbegovic and his colleagues analyze the evidence and reveal new information about nonrandomized drug approvals in two articles, one published today in JAMA Network Open and another published recently in The Lancet.
 
In the JAMA Network Open article, a systematic review and meta-analysis, the authors assessed FDA applications for 606 drugs submitted from 2012 to August 2018 and for 71 medical devices submitted from 1996 to August 2017. Of the 677 applications, 68 (10%) were approved by the FDA based on nonrandomized clinical trials (non-RCTs). A meta-analysis then examined differences between applications that required further testing with randomized clinical trials (RCTs) and those that did not. The authors report that estimated treatment effects were higher for treatments or devices approved based on non-RCTs than for those for which further testing in RCTs was required, and there was no clear threshold of treatment effect above which no RCTs were requested. (A limitation of the study was the small sample size.)

BETWEEN DOCTOR AND PATIENT

Even with the best guidelines, doctors still have to make judgments based on the needs and circumstances of each individual patient.
 
“We typically like to say, we need to administer the right treatment to the right patient at the right time. The aspiration is clear, but it’s more easily said than done,” said Djulbegovic. “Doctors always struggle with these kinds of things. It’s one reason there’s such a huge variation in practice from doctor to doctor.”
 
But with so much room for interpretation, how should doctors and their patients deal with this uncertainty?
 
“There’s nothing wrong with a doctor saying, ‘Look, the evidence is not great, but this is the best treatment I have.’ Physicians should be openly sharing this information with their patients in order to make the best possible decision together,” he said.
 
And patients need to ask questions.
 
As Djulbegovic put it, “In our evidence-based world, we have a saying: No decision about me without me. It’s your life, your body, your soul.”
 
****