Elena Viboch
Medical Decision Making

Introduction
Good medical decision making is something that we expect from our doctors and hope to achieve ourselves. We hope for doctors who will take every measure to ensure the best possible outcome and who will guide us towards optimal medical decision making. When a doctor presents us with the relevant evidence and a choice, when he says “You have all of the information, now it’s up to you to choose the treatment, if any, that you prefer,” we hope that we will be able to make the best possible decision.

Decision making theory can provide a normative framework for evaluating medical decision making and can identify cognitive biases that can affect it. “What Doctors Don’t Know (Almost Everything),” by Kevin Patterson, discusses the use of evidence-based medicine; “The Perils of Prevention,” by Shannon Brownlee, addresses a bias toward preventative treatment. Confirmation bias, flawed probabilistic reasoning (specifically, failure to perform Bayesian updating), and omission bias appear in the problems the authors discuss and limit the effectiveness of some of the solutions they suggest. Before addressing the decision making theory that relates to the types of medical decision making Patterson and Brownlee discuss, it is important to outline the arguments that the authors present. The analysis will discuss cognitive biases and failures to conform to the normative model of decision making and will explicate where these authors’ accounts are consistent with decision making theory and where they deviate from the normative understanding of decision making.

Articles
Patterson argues against the medical hierarchy. Doctors hold a position of authority from which they pass down prescriptions, diagnoses and advice; patients are not empowered with decision making or input of their opinions and preferences. Patterson predicts that “evidence-based medicine,” the practice of basing medical decision making on evidence from controlled clinical studies, will revolutionize medicine by empowering patients to make their own medical decisions. In his model of optimal medical practice, the patient is autonomous and the doctor becomes a conduit for information rather than the decision-making authority. Although Patterson identifies the normative value of evidence-based medicine, he overestimates the ability of patients to assess empirical evidence and make optimal medical decisions.

Brownlee uses empirical evidence to analyze flaws in doctors’ and patients’ medical decision making and their tendency to view early detection and prevention as the ideal course of action. She contends that the common bias towards screening and preventative action results in frequent use of costly and sometimes risky procedures in cases where they are not necessarily helpful. The treatments were intended for the full-blown disease and approved based on their efficacy in such cases. Now, tests that were intended to serve as diagnostics for patients with symptoms are given routinely to screen for disease. Even if patients do not have any apparent symptoms, doctors will aggressively treat them when screening identifies indications of disease. Doctors and patients view this as “erring on the side of caution.” Brownlee highlights that, monetary considerations aside, aggressive early interventions often do not have benefits that outweigh their costs; the aggregate benefits to the few who would have gone on to develop the disease are outweighed by the overall costs in terms of the health risks of treating numerous patients who on average would have been very unlikely to develop the disease even without treatment. This method of decision making is inconsistent with normative theory because the overall costs of preventative treatment outweigh the benefits.

Confirmation Bias
Patterson uses the weakness of doctors’ clinical judgments as evidence in favor of giving patients empirical evidence and the responsibility to make their own decisions. Patterson recognizes that “people, doctors included, have a tendency to see what they expect to see,” which is an informal way of describing a bias in cognition called confirmation bias. This bias is also active in producing the flaws in medical decision making that Brownlee identifies. To borrow Brownlee’s title, the “perils of prevention” may be difficult for doctors to recognize if they believe that aggressive preventative treatment is effective, because their confirmation bias may reinforce their belief. They may even be subject to attitude polarization, believing more strongly in their original hypothesis because their clinical observations consist of ambiguous or mixed evidence.

Confirmation bias is the tendency to search for, interpret, or recall observations that confirm your hypothesis and to neglect to seek falsifying evidence or to discount evidence that does not conform to the hypothesis. Lord, Ross and Lepper found that people presented with equal amounts of evidence supporting two sides of a question will accept confirming evidence and critically evaluate disconfirming evidence, an example of confirmation bias. Their subjects demonstrated attitude polarization, shifting more strongly toward their original points of view after examining mixed or ambiguous empirical findings. Confirmation bias, which is a product of the availability heuristic, is an automatic, intuitive, System I process that is not available to conscious thought. In other words, people are generally not aware that they are influenced by confirmation bias. Doctors do not choose to remember cases that confirmed their hypothesis; rather, those cases that confirm their hypothesis are more salient and, thus, more available to memory later. The availability heuristic, which is also a System I process, causes people to judge probability by thinking of examples. They substitute the question “how frequently can I remember event A occurring?” for “how frequently does event A occur?” The availability heuristic leads people to judge an event’s frequency by the ease with which it comes to mind. The availability heuristic is operative in confirmation bias, leading doctors to believe that their more salient clinical observations that conform to their hypotheses are more frequent and, thus, confirm their hypothesis.

Confirmation bias is active in maintaining doctors’ belief in preventative action. Brownlee discusses the increase in treatment for prostate cancer resulting from widespread screening as an example of doctors and patients favoring too much preventative treatment. PSA screening results prompt doctors to aggressively treat prostate cancer, often by removal of the prostate. Many doctors treat low risk, asymptomatic patients. The prevalence of preventative treatment presumably reflects widespread belief in the hypothesis that the benefits of treatment outweigh the costs for these patients. Doctors’ convictions may be reinforced by confirmation bias when they attempt to assess their hypotheses by reflecting on their clinical experiences. Among cases in which patients did not have their prostates removed, patients who subsequently developed prostate cancer are more salient than those who did not develop prostate cancer in their lifetimes. A doctor evaluating whether the treatment is beneficial will try to recall the outcomes of his patients’ cases, and the more available cases will more readily come to mind. The doctor will be convinced by his biased assimilation of observations that his clinical experience supports PSA screening and an increase in preventative treatment. Empirical evaluation of aggressive prevention might reach a different conclusion about treatment efficacy than that of doctors assessing their clinical observations. Patterson’s prescription of evidence-based medicine may alleviate some of the problems which Brownlee identifies in preventative treatment.

Numerous studies have observed that actuarial judgments, which prescribe a course of action according to a consistent decision rule derived from empirical evidence, perform as well as, or better than, the best doctors’ clinical assessments and subsequent choices of action. Similarly, as Patterson argues, turning to empirical studies to guide evidence-based medicine will improve treatment by eliminating clinical practices that are not efficacious but are maintained by confirmation bias. Patterson correctly identifies that doctors’ conclusions are often biased and that evidence-based medicine provides more accurate judgments. However, he assumes that the logical consequence of shifting from relying on clinical judgment to empirical evidence of treatment efficacy is for doctors to present evidence to patients and allow them to make their own medical decisions. Research in decision making and probabilistic reasoning provides evidence that giving the burden of decision making to patients is not the best way to improve medical decision making and outcomes.
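
As an illustration of what a consistent actuarial decision rule looks like, consider the following sketch; the score, weights, threshold, and function names are invented placeholders, not a rule from the literature.

# Sketch of an actuarial decision rule: a fixed score and cutoff, derived
# from outcome data, applied identically to every patient. The weights and
# threshold below are hypothetical placeholders.

def risk_score(age, psa_level, family_history):
    # A real rule would fit these weights to empirical outcomes.
    return 0.03 * age + 0.2 * psa_level + (0.5 if family_history else 0.0)

THRESHOLD = 3.5  # hypothetical cutoff

def recommend_biopsy(age, psa_level, family_history):
    # The same rule for every patient; no case-by-case clinical impression.
    return risk_score(age, psa_level, family_history) >= THRESHOLD

print(recommend_biopsy(65, 6.0, True))  # True under these invented weights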

Bayesian Analysis
Assigning the burden of difficult probability judgments to patients who are under emotional stress and most likely have no background in probabilistic reasoning is a very poor idea. Patterson’s prescription fails to take into account patients’ limited probabilistic reasoning, a critical tool in evaluating empirical data to make a decision. Bayesian updating is the normative method of revising the probability that a hypothesis is true based on evidence that it is true.i If patients cannot perform Bayesian analysis, then they will make medical decisions on the basis of incorrect assessments and are unlikely to choose the option that best achieves their goals. Brownlee discusses patients’ and doctors’ decisions regarding preventative action, which often does not result in the optimal outcome. Due to flaws in their probabilistic reasoning, patients and doctors interpret evidence from screening incorrectly, which biases their decisions to favor preventative treatments that have suboptimal results.

Empirical evidence indicates that most people perform poorly at revising probability judgments. Research has shown that even doctors make incorrect probability judgments from screening and diagnostic results, failing to perform Bayesian updating. Evidence from testing should enable doctors to revise their judgments of the probability that the patient has a disease from the estimated prior probability of disease. The relevant data for revising probability estimates are the true positive rate, in which the test correctly identifies the disease, and the false positive rate, in which the test indicates the disease when the patient does not have the disease. These data are combined via Bayes’s theorem with the patient’s prior probability of disease and prior probability of not having the disease to determine the updated probability that a patient has the disease given that she tested positive. The updated probability estimate can guide subsequent decision making.
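
A minimal sketch of this computation follows; the function name, the prior, and the test characteristics are hypothetical, not the operating characteristics of any real test.

# Bayesian updating of a disease probability after a positive test.
# All numbers below are hypothetical.

def posterior_given_positive(prior, true_pos_rate, false_pos_rate):
    # Bayes's theorem: p(H/D) = p(D/H)p(H) / [p(D/H)p(H) + p(D/~H)p(~H)]
    numerator = true_pos_rate * prior
    denominator = numerator + false_pos_rate * (1 - prior)
    return numerator / denominator

# A rare condition (1% prior) and an apparently accurate test:
print(posterior_given_positive(prior=0.01, true_pos_rate=0.90, false_pos_rate=0.10))
# ~0.083: even after a positive result the disease remains unlikely,
# which is exactly the step that unaided intuition tends to get wrong.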

Patterson claims that doctors are often unreliable, which is plausible in light of confirmation bias and doctors’ poor probability judgments. Patterson suggests the responsibility for medical decision making should be given to patients. He does not consider that, lacking the experience and greater mathematical training of doctors, patients are less likely to use Bayesian updating to make probability judgments. Patients would base their decisions on their incorrect probability judgments, resulting in suboptimal outcomes. Doctors’ difficulty with Bayesian updating could be ameliorated by greater emphasis on probabilistic reasoning in medical school and continuing education, but it is not feasible to educate an entire population in Bayesian updating.

Unlike Patterson, Brownlee recognizes that patients and doctors demonstrate poor probability assessment; in particular, she focuses on their preference for prevention in cases where the test results do not produce a normatively revised probability estimate that necessitates treatment. Elective angioplasty has proliferated in response to excess angiogram screening. Evidence of narrowing is incorrectly used to revise the probability judgment of an imminent heart attack; the resulting overestimate prompts preventative treatment, angioplasty. Increased screening provokes a rise in treatment that is often not correlated with a corresponding decrease in mortality from a given disease because the screening, via incorrect probability judgments, causes treatment of patients who would have been unlikely to develop the disease if they had not been treated. Brownlee’s example supports an implication of Bayesian updating: that screening is unnecessary in cases in which, given either a positive or negative result, normatively revised probability estimates do not necessitate preventative treatment. Doctors and patients choose screening when the evidence cannot change the normatively optimal medical decision, often believing that any information that can be collected should be, which creates an additional opportunity for poor Bayesian updating to impact medical decision making.
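
Brownlee’s implication can be made concrete: when neither possible result moves the revised probability across the treatment threshold, the screen cannot change the decision. A sketch, with invented numbers and a hypothetical threshold:

# If neither test outcome moves the posterior past the treatment threshold,
# the screen cannot change the decision. All numbers are invented.

def posterior(prior, p_result_given_H, p_result_given_notH):
    num = p_result_given_H * prior
    return num / (num + p_result_given_notH * (1 - prior))

prior = 0.02            # hypothetical prior probability of imminent disease
TREAT_THRESHOLD = 0.30  # hypothetical: treat only above this probability

post_positive = posterior(prior, 0.8, 0.2)  # hit rate 0.8, false positive rate 0.2
post_negative = posterior(prior, 0.2, 0.8)  # miss rate 0.2, true negative rate 0.8

print(post_positive, post_negative)      # ~0.075 and ~0.005
print(post_positive >= TREAT_THRESHOLD)  # False: even a positive result
# would not justify treatment, so the screen is uninformative for the decision.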

Omission Bias
The choice of aggressive preventative treatment may be influenced not only by patients’ and doctors’ poor probabilistic reasoning, which leads them to overestimate disease probability, but also by a common intuition that Brownlee identifies. Most doctors and patients feel that “it is better to err on the side of doing more rather than less” or to “err on the side of caution.” Their intuitions reflect omission bias, an automatic, intuitive System I process. Omission bias leads people to judge two equally bad outcomes differently depending on whether the outcome resulted from an act of commission or omission. Actions that deviate from the status quo (commissions) are judged more morally reprehensible than failures to act (omissions) in situations in which not acting is the norm.

Patient-driven medical decisions may be more influenced by omission bias than doctor-prescribed decisions. Doctors are more likely than patients to become aware of, and compensate for, omission bias. Doctors have greater exposure to actuarial methods, which accurately weigh the costs and benefits of omissions and commissions, and have more experience in making medical decisions than patients, who are unlikely to be aware that they are influenced by omission bias.

Omission bias is operative in patients’ and doctors’ choices concerning preventative treatment. Brownlee gives the example of angiograms: when doctors observe evidence of narrowing in an angiogram, they want to do something about it. In addition to doctors’ incorrect revision of their probability judgments, they respond to angiogram evidence that indicates an uncertain future potential for heart attacks with the intuition that they must err toward treating every narrowing as if it could potentially cause an immediate heart attack. By favoring treatment, which is the norm, they hope to prevent some low risk patients’ potential heart attacks. In medicine, taking action and treating indications of disease are the norm; preventative action, because it conforms to the norm, in effect plays the role of an omission, while choosing not to act functions as a commission because it deviates from the norm. Patients and doctors seek to avert the possible risk of heart attack from even the mildest cases of narrowing of the arteries without considering the costs and benefits of preventative action. Angioplasty has health risks and monetary costs, but omission bias causes most doctors and patients to neglect the costs of omission, in this case the costs of preventative action.

Conclusion
Research indicates, and our discussion of confirmation bias supports, that evidence-based medicine’s empirical tests of hypotheses and actuarial prediction result in better medical outcomes than clinical judgments. Patients should not be responsible for trading off the costs and benefits of medical decisions. Actuarial assessments would be very difficult to transmit to patients due to confirmation bias, hard for them to assimilate due to probabilistic reasoning that violates the normative standard of Bayesian analysis, and difficult to get them to act on because of omission bias. Patients rely more on their intuitions than doctors do and would not be able to reap the benefits of evidence-based medicine’s prescriptions because of their limitations in processing and the time costs of evaluating evidence. Doctors are also subject to confirmation bias, poor probabilistic reasoning and omission bias, but the costs to doctors of receiving training to enable accurate assimilation of evidence into probability judgments and to utilize actuarial predictions are much lower than the costs of the same training would be for patients. The training would have limited future applications for the patient, but would continue to serve doctors throughout their careers. In addition, the aggregate cost of training doctors would be lower than that of training patients.

Patients should benefit from evidence-based medicine, but they should not have to navigate a complex set of data that would be very difficult for them to interpret. Ideally, doctors would engage in a variant on what Patterson calls “shared decision making.” Though patients would not be responsible for evaluating evidence, patients could collaborate with their doctors so that doctors could guide the decision-making process according to patients’ preferences for risk, tolerance for aggressive measures and any other relevant preferences. This system would bring together doctors armed with evidence-based knowledge and training in decision-making analysis and patients who are actively engaged in guiding decisions according to their preferences.


i Bayesian updating requires the use of Bayes’s theorem, given below.

Bayes’s Theorem: p(H/D) = [p(D/H) ∙ p(H)] / [p(D/H) ∙ p(H) + p(D/~H) ∙ p(~H)]

where H represents the hypothesis, D represents evidence that the hypothesis is true, and ~H represents the negation of the hypothesis:
p(H/D) is the probability that the hypothesis is true contingent on evidence that the hypothesis is true
p(D/H) is the probability of evidence that the hypothesis is true contingent on the hypothesis being true (hit rate, or true positive rate)
p(H) is the probability that the hypothesis is true
p(D/~H) is the probability of evidence that the hypothesis is true contingent on the hypothesis being not true (false positive rate)
p(~H) is the probability that the hypothesis is not true
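A hypothetical numerical illustration (all figures invented): with p(H) = 0.05, p(D/H) = 0.80, and p(D/~H) = 0.15, p(H/D) = (0.80 ∙ 0.05) / (0.80 ∙ 0.05 + 0.15 ∙ 0.95) = 0.04 / 0.1825 ≈ 0.22, so even after a positive result the hypothesis remains more likely false than true.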