
From Possible to Probable to Sure to Wrong—Premature Closure and Anchoring in a Complicated Case

David E. Newman-Toker, MD, PhD | April 1, 2013

The Case

A previously healthy 44-year-old man was admitted to the hospital with a 2-day history of headache and word-finding difficulties. Neurological examination was normal but computed tomography (CT) and magnetic resonance imaging (MRI) of the head revealed parietal and frontal masses concerning for malignancy or infection. Biopsy revealed evidence of vasculitis. Consultation with infectious disease, rheumatology, and neurology led to a provisional diagnosis of primary central nervous system (CNS) vasculitis. The patient was started on steroid and cyclophosphamide therapy and discharged after improvement in his symptoms.

Over the next month, the patient continued to feel well without a recurrence of his symptoms; however, serial brain MRI showed progression of the patient's lesions. Given his symptomatic improvement, the steroids were slowly tapered. A repeat MRI continued to show progression. Four months after the initial presentation, he re-presented to the emergency department after developing receptive and expressive aphasia and disorientation. Imaging again revealed evidence of worsening lesions and repeat biopsy showed glioblastoma multiforme. The patient underwent resection and adjuvant chemotherapy followed by rapid clinical decline.

Subsequent review of the case, including the clinical documentation, noted that the provisional diagnosis of CNS vasculitis at the time of hospital discharge gradually morphed into a certain diagnosis of CNS vasculitis during the ensuing outpatient follow-up, in both the minds of the clinicians and the chart documentation. As a result, the diagnosis was not revisited, even in the presence of contradictory data, leading to prolonged inappropriate therapy and a delay in the correct diagnosis and treatment.

The Commentary

Diagnostic errors have gained recent visibility as a patient safety and public health priority.(1) Unlike therapeutic errors, errors in diagnosis are notoriously difficult to measure.(2) The relatively long time lag between error and detection (compared with wrong-site surgery), poor clinical documentation of key diagnostic details and of the diagnostic reasoning process (compared with medication errors), and the lack of consensus about the correct diagnostic process (compared with violations of therapeutic guidelines) all make retrospective assessments of the "diagnosability" or "preventability" of harm subject to substantial hindsight and outcome biases.(3) Nevertheless, it is generally believed that diagnostic errors are under-recognized (4), under-reported (5), and under-appreciated.(6)

Neither autopsy-based studies (7) nor malpractice cases (8) provide a complete accounting of diagnostic errors.(2) Newer methods based on electronic health record trigger tools to screen for potential errors (9) may hold some promise, but more research on measurement is needed.(2) Measurement issues notwithstanding, diagnostic errors seem to be common. Typical estimates place the overall diagnostic error rate in clinical practice in the range of 10%–15%.(4,10) Diagnostic error rates appear to vary by specialty (e.g., 5% in radiology vs. 12% in emergency medicine) (4), by condition (e.g., 2% of myocardial infarctions vs. 9% of strokes) (11), and especially by clinical presenting features (e.g., 4% of strokes presenting with traditional symptoms vs. 64% of strokes presenting with non-traditional symptoms) (12), with atypical and non-specific presentations increasing the risk of misdiagnosis substantially.(11,13) Diagnostic errors, particularly those that result from physician negligence, lead to serious disability or death roughly half the time.(14) Misdiagnoses are estimated to account for approximately 40,000–80,000 preventable hospital deaths each year in the United States through missed opportunities to apply prompt, correct treatment or through the application of incorrect treatments.(1) This estimate accounts neither for deaths from errors in ambulatory care settings nor for errors resulting in non-lethal morbidity.(8) The most recent estimates suggest that the aggregate figure may be 150,000 or more patients per year experiencing serious, readily preventable, misdiagnosis-related harm in the US (2), so there is substantial room for performance improvement.

This case of a man with new headache illustrates some fundamental challenges faced by physicians when they attempt to diagnose the underlying cause of a new symptom. Diagnoses are often made under time pressure with incomplete information and almost always involve some degree of uncertainty. When a 44-year-old man presents with 2 days of a new headache, by far the most common cause is a benign primary headache disorder, such as migraine or tension-type headache. Accompanying neurological symptoms, including word-finding difficulties as reported here, are generally "red flags" in headache patients, but normal neurological examination findings, also reported here, are "green flags." Many patients with migraine report mild word-finding difficulties during an acute attack, and even severe language disturbances, such as frank aphasia, can result from migraine.(15)

So even in the first stage of the case, the physician is faced with a complex set of probabilistic judgments:

  • What is the prevalence of new migraine in a 44-year-old man?
  • How likely is it that the word-finding difficulty is a "real" neurological symptom rather than merely an "overcall" in an attentive or anxious patient?
  • If it is "real," how much does that influence the likelihood that the headache is benign versus dangerous?
  • Does the otherwise normal neurologic examination trump the word-finding difficulty, or the reverse?
  • If the former, how confident am I that my neurological examination skills would have identified important, subtle findings?
  • To what extent would additional diagnostic tests help resolve this question?
  • How large is the risk of a watchful waiting strategy (i.e., what is the short-term, weighted-average prognosis across every potentially dangerous condition on the list of differential diagnostic considerations)?
  • Do the potential benefits of additional tests outweigh the potential risks of toxicity from those tests (e.g., CT radiation, contrast nephropathy) or from any subsequent tests (e.g., brain biopsy) prompted by false-positive ("incidentaloma") findings on the first test?

With such a high degree of complexity and uncertainty, it should probably be considered an achievement that physicians ever outperform chance in diagnostic decision-making. Although numerous systems and cognitive factors may predispose to misdiagnosis (10), diagnostic errors are most often linked to bedside mistakes in history elicitation, physical examination, test ordering, or test interpretation.(2,16) Some key contributing causes are listed in Table 1.
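
To give a flavor of the probabilistic judgments listed above, the minimal sketch below works one of them through Bayes' rule. The prior probability and likelihood ratios are hypothetical placeholders chosen purely for illustration (and chaining them assumes the two findings are conditionally independent); the point is only that a "red flag" and a "green flag" partially offset one another rather than cancel.

```python
# A minimal sketch of the Bayesian updating implied by the questions above.
# All numbers are hypothetical placeholders chosen for illustration; sequential
# updating also assumes the two findings are conditionally independent.

def update(prior_prob: float, likelihood_ratio: float) -> float:
    """Convert a probability to odds, apply a likelihood ratio, convert back."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.05            # hypothetical prior: dangerous secondary cause of headache
lr_word_finding = 4.0   # hypothetical LR for the "red flag" word-finding difficulty
lr_normal_exam = 0.5    # hypothetical LR for the "green flag" normal neurological exam

p = update(prior, lr_word_finding)   # ~0.17 after the red flag
p = update(p, lr_normal_exam)        # ~0.10 after the reassuring exam
print(f"Posterior probability of a dangerous cause: {p:.2f}")
```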

In this case, the "word-finding difficulties" should (and apparently did) prompt clinicians to consider alternate diagnoses such as transient ischemic attack or stroke, other structural brain lesions (e.g., neoplasm, demyelination), or seizures. Neuroimaging was the correct choice of subsequent diagnostic test, but the CT scan of the brain was probably inappropriate, with risks exceeding benefits. The sensitivity of CT for the major potentially causal lesions (e.g., ischemic stroke) is too low (e.g., 16% [17]) to serve as a screening gatekeeper—accordingly, a negative CT here should always be followed by MRI. On the other hand, if a lesion is found by CT, an MRI to better define the anatomy and pathology of the lesion will invariably follow before a biopsy is considered. There are few potential lesions in this case for which CT could offer information complementary to MRI—it is simply a lower-quality version of the same test. An argument to support CT use here is that, even if it will not alter subsequent testing (i.e., MRI will be obtained regardless), CT could be the only option to rule out imminent intracranial threats if MRI is not immediately available. However, given the normal neurological examination (including normal mental status), it was highly improbable that the patient had a large structural lesion that could prove imminently lethal (e.g., a large mass or obstructive hydrocephalus at risk of brain herniation). Diagnostic test overuse, especially of advanced imaging, is an increasingly pressing problem in the context of rising US health care costs.(18) Eliminating redundant testing, as with the CT in this case, is one way to improve quality and simultaneously reduce costs.
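
A rough back-of-envelope calculation shows why the low sensitivity of CT matters here. In the sketch below, the 16% sensitivity comes from the cited comparison study (17), while the specificity and the pretest probability are assumed values used only to illustrate the point: a negative CT leaves the probability of an ischemic lesion nearly where it started, which is why MRI must follow.

```python
# Back-of-envelope illustration of why a negative CT cannot exclude ischemic
# stroke. The 16% sensitivity is taken from the cited comparison study (17);
# the specificity and pretest probability are assumed for illustration only.

def prob_after_negative(pretest: float, sensitivity: float, specificity: float) -> float:
    """Post-test probability of disease after a negative test result."""
    lr_negative = (1.0 - sensitivity) / specificity
    pretest_odds = pretest / (1.0 - pretest)
    posttest_odds = pretest_odds * lr_negative
    return posttest_odds / (1.0 + posttest_odds)

pretest = 0.30       # assumed pretest probability of an ischemic lesion
sensitivity = 0.16   # CT sensitivity for acute ischemic stroke, per reference (17)
specificity = 0.98   # assumed; false-positive CT diagnoses of infarction are rare

print(f"Probability after a negative CT: "
      f"{prob_after_negative(pretest, sensitivity, specificity):.2f}")  # ~0.27
```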

Biopsy was probably the correct next diagnostic test if the lesion looked clearly like a malignancy or focal infectious encephalitis on MRI, although about 5% of biopsies yield unanticipated stroke or inflammatory diagnoses (19) that could have been revealed through watchful waiting and repeat MRI, without the need for an intracranial procedure. Vasculitis was a parsimonious explanation for all of the patient's recent symptoms, so it was reasonable to choose it as the working diagnosis, regardless of the level of certainty expressed in the pathologist's report ("possible"/"probable"/"definite"). Because vasculitis confers a significant risk of subsequent stroke, initiating empiric therapy was also reasonable, even if the pathologic diagnosis was provisional. However, the apparent response to therapy with steroids and cyclophosphamide may have been overvalued by the team. It is psychologically unavoidable (and often reasonable) for physicians to view response to treatment as one more diagnostic "test" that helps confirm the working diagnosis. Unfortunately, physicians rarely temper their interpretations by considering the sensitivity and specificity of this test. If the treatment effect is insensitive (i.e., sub-optimally effective, with many true cases showing a partial response or no response), then the absence of a therapeutic response implies very little about the incorrectness of the working diagnosis. Conversely, if the treatment effect is non-specific vis-à-vis other diseases on the differential diagnosis, as in this case, then the presence of a therapeutic response implies very little about the correctness of the working diagnosis. Steroids, in particular, yield symptomatic improvement in many conditions, including brain tumors with surrounding edema (the correct diagnosis in this case) and steroid-responsive brain neoplasms (e.g., lymphoma). After treatment, the team prematurely closed on the vasculitis diagnosis and anchored to it, despite mounting evidence against vasculitis from follow-up MRIs obtained before the patient's second symptomatic decline.
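
To make the point concrete, the sketch below treats "improvement on steroids" as if it were a diagnostic test. The pretest probability, the "sensitivity" (proportion of true vasculitis cases that improve), and the "specificity" (proportion of mimics that do not improve) are all hypothetical values chosen only for illustration; because so many mimics also respond to steroids, the positive likelihood ratio sits close to 1 and the post-treatment probability of vasculitis barely moves.

```python
# Treating "response to steroids" as a diagnostic test for CNS vasculitis.
# The operating characteristics below are hypothetical and chosen only to show
# how little a non-specific response should raise confidence in the diagnosis.

def prob_after_positive(pretest: float, sensitivity: float, specificity: float) -> float:
    """Post-test probability of disease after a positive result (here, improvement)."""
    lr_positive = sensitivity / (1.0 - specificity)
    pretest_odds = pretest / (1.0 - pretest)
    posttest_odds = pretest_odds * lr_positive
    return posttest_odds / (1.0 + posttest_odds)

pretest = 0.70       # assumed confidence in CNS vasculitis after the biopsy
sensitivity = 0.80   # assumed: most true vasculitis cases improve on steroids
specificity = 0.30   # assumed: most mimics (e.g., edematous tumors) also improve

print(f"Probability of vasculitis after steroid response: "
      f"{prob_after_positive(pretest, sensitivity, specificity):.2f}")  # ~0.73
```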

The relationship between diagnosis and treatment is complex. Although we often think of them as separable, these two processes are usually intertwined. Outside of academic circles, the act of medical diagnosis is not particularly valuable as an end unto itself, so diagnosis might be better thought of as management-informed diagnostic decision-making. Labeling the patient with a diagnosis is really a surrogate for offering a natural history prognosis and an anticipated response to a particular set of available treatment interventions. Diagnosis is therefore most valuable when resolving questions about the correct diagnosis will substantially alter (therapeutic) management in a way that improves health outcomes for a patient. Absent effective treatment options, however, confirming a diagnosis may still offer psychological or practical benefit for the patient and/or physician. For example, in this case, it is known that, despite maximal medical therapy, more than 95% of patients die within 3 years of a diagnosis of glioblastoma multiforme.(20) Knowing the lethal diagnosis, the patient might choose to spend more time with his family in the coming year than he might otherwise, had he continued to believe his diagnosis was poorly controlled vasculitis.

Despite this close, intermingled relationship, physicians often distinguish sharply between the diagnostic mode of thinking (in which diagnostic uncertainty is acknowledged, but therapeutic uncertainty is ignored pending a confirmed diagnosis) and the therapeutic mode of thinking (in which diagnostic uncertainty is unacknowledged, and only therapeutic uncertainty is considered). It is common to hear physicians give patients advice such as, "First, we need the diagnosis. Then we can talk about treatment." Physicians usually exist mentally in diagnostic mode or therapeutic mode, but not both simultaneously.(21) This cognitive firewall between diagnostic and therapeutic reasoning exists at least partly for practical reasons—our knowledge of the relationships between presenting symptoms and treatment responsiveness hinges on a particular diagnosis, as does practically the entire scientific evidence base of modern medicine (other than in cases of empiric or symptomatic treatment without a diagnosis). This sharp separation may have a downside, however, since it may make it more difficult to extract ourselves from the therapeutic mode once it is entered. This may contribute to insistence on a premature initial diagnosis despite disconfirming evidence (anchoring), as occurred in this case.

Human reasoning is subject to computational limitations and biasing factors that cause specific types of cognitive distortion (Table 2). Among these, premature closure is one of the most common causes of diagnostic error.(22) Escaping premature closure after it occurs is challenging. Some have advocated for the use of generic metacognitive strategies (e.g., knowledge of error theory, familiarity with major types of heuristics and biases, including premature closure) to improve our overall self-monitoring so that we might recognize when we are at risk of becoming (or have already become) cognitively trapped.(23) This approach must usually be combined with more specific red flag triggers, such as failure to respond to initial treatment or repeat visits for the same complaint (22), or with well-recognized clinical pitfall scenarios.(23) Neither of these approaches to catching premature closure, however, would have altered management or outcomes for this particular patient.

More promising would be approaches designed to prevent premature closure in the first place, such as taking a diagnostic time out (a deliberate pause to reassess the working diagnosis before further action is taken—e.g., "why can't this be something else?").(22) In this case, there was also an interaction between individual physician premature closure and team communication failure—the pathologist's diagnostic uncertainty was not adequately conveyed to the treating team, and the treating team's uncertainty was not propagated forward during treatment and follow-up. Some authors have advocated for use of the status "not yet diagnosed" to denote cases in which the diagnosis remains unclear at the conclusion of an encounter (24); this likely works well when the diagnosis is unknown or largely speculative, but it is probably not valuable in a case like this, in which the working diagnosis has apparently been confirmed by pathology and potentially toxic therapies are already being applied. Along these lines, I would suggest a generic strategy we should all adopt to prevent premature closure: grade every diagnosis with a level of certainty. The grading system could be a simple percentage from 1%–99% or an ordinal rating scale with anchors (1%–10% speculative; 11%–25% improbable; 26%–50% possible; 51%–75% likely; 76%–90% probable; 91%–99% definite; >99% confirmed). Providers could track their stated certainty against their patients' eventual diagnoses to improve the calibration of their personal estimates over time. Electronic health record systems could perhaps facilitate this with a required "certainty" field, although billing concerns would doubtless intrude.
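
As one purely illustrative way to operationalize this suggestion, the sketch below maps a stated certainty onto the ordinal anchors proposed above and summarizes a provider's calibration with a Brier score. The data structures, the hypothetical track record, and the choice of score are my own assumptions, not features of any existing system.

```python
# Illustrative sketch of the proposed certainty grading, using the ordinal
# anchors from the text, plus a simple calibration summary (a Brier score).
# The data structures and the score are assumptions, not an existing system.

ANCHORS = [
    (10, "speculative"), (25, "improbable"), (50, "possible"),
    (75, "likely"), (90, "probable"), (99, "definite"), (100, "confirmed"),
]

def grade(certainty_pct: float) -> str:
    """Map a stated certainty (1-100%) onto the ordinal anchors above."""
    for upper_bound, label in ANCHORS:
        if certainty_pct <= upper_bound:
            return label
    return "confirmed"

def brier_score(track_record: list[tuple[float, bool]]) -> float:
    """Mean squared gap between stated certainty and whether the working
    diagnosis was eventually confirmed; lower means better calibration."""
    return sum((pct / 100.0 - float(confirmed)) ** 2
               for pct, confirmed in track_record) / len(track_record)

# Hypothetical provider log: (stated certainty %, diagnosis later confirmed?)
log = [(60, True), (85, True), (90, False), (40, False), (75, True)]
print(grade(60))                               # "likely"
print(f"Brier score: {brier_score(log):.3f}")  # 0.243
```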

Most diagnostic errors are ultimately caused by defects in human cognition resulting in clinical reasoning failures, but this does not mean that all solutions must directly address this root cause.(1) Classifying diagnostic errors by clinical context rather than by cognitive bias might offer opportunities to develop systems solutions to cognitive problems (Table 3), much the way computerized ordering of medications can circumvent handwriting errors in drug prescriptions without necessitating handwriting retraining.(1) Stakeholders should prioritize identifying and remedying the most frequent and harmful types of diagnostic error.

Take-Home Points

  • Diagnostic errors are frequent; at least 150,000 patients are estimated to experience serious preventable harm annually in the US.
  • Misdiagnoses are most often linked to bedside mistakes in history taking, physical examination, test ordering, or test interpretation.
  • Root causes are often cognitive biases, and premature closure is among the most common.
  • Before closing on a diagnosis, take a diagnostic time out; consider using "not yet diagnosed" if you are really unsure.
  • Challenge yourself to apply and maintain a certainty rating when recording your patient's diagnoses.

David E. Newman-Toker, MD, PhD
Associate Professor, Department of Neurology
Johns Hopkins Hospital
Baltimore, MD

References

1. Newman-Toker DE, Pronovost PJ. Diagnostic errors—the next frontier for patient safety. JAMA. 2009;301:1060-1062. [go to PubMed]

2. Newman-Toker DE, Makary MA. Measuring diagnostic errors in primary care. JAMA Intern Med. 2013;173:425-426. [go to PubMed]

3. Wears RL, Nemeth CP. Replacing hindsight with insight: toward better understanding of diagnostic failures. Ann Emerg Med. 2007;49:206-209. [go to PubMed]

4. Graber M. Diagnostic errors in medicine: a case of neglect. Jt Comm J Qual Patient Saf. 2005;31:106-113. [go to PubMed]

5. Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? Qual Saf Health Care. 2003;12:221-226. [go to PubMed]

6. Wachter RM. Why diagnostic errors don't get any respect—and what can be done about them. Health Aff (Millwood). 2010;29:1605-1610. [go to PubMed]

7. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA. 2003;289:2849-2856. [go to PubMed]

8. Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, Newman-Toker DE. 25-Year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22:672-680. [go to PubMed]

9. Singh H, Giardina TD, Meyer AND, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013;173:418-425. [go to PubMed]

10. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493-1499. [go to PubMed]

11. Newman-Toker DE, Robinson KA, Edlow JA. Frontline misdiagnosis of cerebrovascular events in the era of modern neuroimaging: a systematic review [abstract]. Ann Neurol. 2008;64(Suppl 12):S17-S18.

12. Lever NM, Nyström KV, Schindler JL, Halliday J, Wira C III, Funk M. Missed opportunities for recognition of ischemic stroke in the emergency department. J Emerg Nurs. 2013;39:434-439. [go to PubMed]

13. Kostopoulou O, Delaney BC, Munro CW. Diagnostic difficulty and error in primary care—a systematic review. Fam Pract. 2008;25:400-413. [go to PubMed]

14. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324:377-384. [go to PubMed]

15. Mishra NK, Rossetti AO, Ménétrey A, Carota A. Recurrent Wernicke's aphasia: migraine and not stroke! Headache. 2009;49:765-768. [go to PubMed]

16. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169:1881-1887. [go to PubMed]

17. Chalela JA, Kidwell CS, Nentwich LM, et al. Magnetic resonance imaging and computed tomography in emergency assessment of patients with suspected acute stroke: a prospective comparison. Lancet. 2007;369:293-298. [go to PubMed]

18. Iglehart JK. The new era of medical imaging—progress and pitfalls. N Engl J Med. 2006;354:2822-2828. [go to PubMed]

19. Tilgner J, Herr M, Ostertag C, Volk B. Validation of intraoperative diagnoses using smear preparations from stereotactic brain biopsies: intraoperative versus final diagnosis—influence of clinical factors. Neurosurgery. 2005;56:257-265. [go to PubMed]

20. Krex D, Klink B, Hartmann C, et al. Long-term survival with glioblastoma multiforme. Brain. 2007;130(Pt 10):2596-2606. [go to PubMed]

21. Schattner A, Magazanik N, Haran M. The hazards of diagnosis. QJM. 2010;103:583-587. [go to PubMed]

22. Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med. 2011;86:307-313. [go to PubMed]

23. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41:110-120. [go to PubMed]

24. Campbell SG. 16 Milestones—10 Years. Can J Diagn. 2003:115-118.

Tables

Table 1. Key system and cognitive factors contributing to diagnostic error.

Key System Factors that Contribute to Diagnostic Error
  • Equipment failures (e.g., instrument miscalibration)
  • Teamwork/communication/transition-of-care failures
  • Inefficient testing/referral processes causing diagnostic delays
  • Ineffective triage/referral to physicians with correct expertise

Key Cognitive Factors that Contribute to Diagnostic Error
  • Faulty knowledge or skills
  • Faulty data gathering
  • Faulty information processing
  • Faulty verification

Table 2. Cognitive Biases and Failed Heuristics. Adapted from (22) with permission from the Association of American Medical Colleges.

  • Anchoring: The tendency to perceptually lock on to salient features of the patient's presentation too early in the diagnostic process and to fail to adjust this initial impression in light of later information.
  • Availability: The disposition to judge things as more likely or more frequent if they come readily to mind.
  • Base-rate neglect: The tendency to ignore the true prevalence of a disease, either inflating or reducing its base rate, thereby distorting Bayesian reasoning.
  • Premature closure: The decision-making process ends too soon; the diagnosis is accepted before it has been fully verified. "When the diagnosis is made, the thinking stops."
  • Representativeness restraint: The physician looks for prototypical manifestations of disease (pattern recognition) and fails to consider atypical variants.
  • Search satisficing: The tendency to call off a search once something is found.
  • Unpacking principle: The failure to elicit all relevant information when establishing a differential diagnosis.
  • Context errors: The critical signal is distorted by the background against which it is perceived.

Table 3. Classifying Diagnostic Errors to Facilitate Systems Solutions. Reprinted from (1) with permission from the American Medical Association.

  • Visual diagnosis (e.g., pathology, radiology)
      Typical problem: Failed visual pattern recognition
      Low-technology solution: Independent second reads
      High-technology solution: Computer-assisted feature matching
  • Complex hospitalized patient (e.g., critical illness, major trauma)
      Typical problem: Information overload amid multiple physiologic derangements
      Low-technology solution: Structured diagnostic protocols/algorithms
      High-technology solution: Data visualization tools
  • Routine ambulatory patient (e.g., headache, fatigue)
      Typical problem: Complacency regarding uncommon dangerous causes
      Low-technology solution: "Don't-miss-diagnosis" checklists at sick visits
      High-technology solution: Symptom-oriented diagnostic decision support
  • Patient with rare symptoms (e.g., hearing their eyes move) or unusual constellations of symptoms
      Typical problem: Lack of specialized knowledge of rare symptoms/diseases
      Low-technology solution: Streamlined triage to diagnostic experts
      High-technology solution: Internet search engines for information on possible diagnoses
  • Asymptomatic patient (e.g., routine cancer screening)
      Typical problem: Forgot to trigger screening protocol
      Low-technology solution: Screening checklists at well visits
      High-technology solution: Automated reminders in electronic health records
