The Emergence of the Trigger Tool as the Premier Measurement Strategy for Patient Safety
In the landmark 1999 report, To Err is Human: Building a Safer Health System, the Institute of Medicine estimated that avoidable medical errors contribute to 44,000–98,000 deaths, and more than a million injuries, annually in United States hospitals.(1) In response to these disturbing data, accreditation bodies, payers, nonprofit organizations, governments, and hospitals launched major initiatives and invested considerable resources to improve patient safety.(2-3) Assessing the impact of these patient safety initiatives requires generally accepted, rigorous, standardized, and practical measures of adverse events.(4-5)
A number of approaches to measuring adverse event rates have been used, including voluntary reports ("incident" or "occurrence" reports), mining of administrative databases (most notably the Agency for Healthcare Research and Quality's [AHRQ] Patient Safety Indicators), the two-stage review process used in the Harvard Medical Practice Study, and the Institute for Healthcare Improvement's (IHI) "trigger tool" approach.(6-7) Each of these methods has advantages and limitations (Table). By identifying clues that direct chart reviewers to the specific events during a patient's hospitalization most likely to involve an adverse event, the trigger tool approach provides an efficient variation on retrospective chart review and overcomes many of the limitations of the other methods.(7-11) A brief discussion of each of these approaches is worthwhile, because the adverse event rates identified vary dramatically with the technique used to detect and measure harm.
Occurrence reports: The best-known strategy for identifying and measuring patient safety events in U.S. hospitals is the use of occurrence ("incident") reports submitted by caregivers. Although these data are relatively easy and inexpensive to obtain, evidence suggests that occurrence reports are underutilized (12-14) and identify only between 2% and 8% of all adverse events in the inpatient setting.(7,9-10,12) This underutilization stems from the fact that occurrence reports are voluntary, time intensive, far more likely to be completed by nurses than physicians (15), and frequently perceived by staff to result in punitive action.(12) Although occurrence reports offer important clues to process flaws, they generally capture near misses and sentinel events and rarely reflect the full spectrum of adverse events.(16-18)
Administrative data sets: Approaches to measuring patient safety using administrative data sets are appealing, as these data are often routinely available, inexpensive to obtain, and immediately comparable across sites. However, administrative data sets, which are the source of adverse event rates identified by AHRQ's Patient Safety Indicators (19), are highly susceptible to variation in coding practices, and harms documented only in the medical record can easily go uncaptured in the coded data. As a result, present approaches to identifying adverse events using administrative data sets have limited sensitivity and specificity and should probably be used only to help hospitals prioritize chart review and improvement initiatives.(7,20-21)
Retrospective or concurrent chart review: The Harvard Medical Practice Study used retrospective chart review to uncover adverse events.(22) Another influential study identified adverse events using a combination of "voluntary and verbally solicited reports from house officers, nurses, and pharmacists; and by medication order sheet, medication administration record, and chart review of all hospitalized patients."(17) Several other significant safety studies used similar methods. The most frequently cited adult studies using a retrospective methodology (22-23) revealed adverse event rates of 3.7 and 2.9 per 100 admissions, respectively. This identification strategy suffers from several problems: inconsistency in defining adverse events; poor, incomplete, confusing, or conflicting entries in the medical records; and resource intensiveness. This methodology was valuable in the early days of the patient safety field by highlighting the major patient safety risks present in inpatient health care settings. However, it has largely been replaced by the more efficient and more sensitive trigger tool method described below.(7)
Trigger-based chart review: The trigger tool methodology has emerged as the premier approach for adverse event detection.(7,24-25) Triggers, defined as "occurrences, prompts, or flags found on review of the medical record that 'trigger' further investigation to determine the presence or absence of an adverse event" (26), have been shown to identify adverse events more efficiently than any other published detection method.(7,9-10,12-13,25-26) Recent studies using the IHI Global Trigger Tool (27) have identified harm rates in adults in U.S. hospitals of 49 per 100 admissions (33% of patients) (7), 36 per 100 admissions (28% of patients) in Medicare patients (25), and 25 per 100 admissions (18% of patients) across North Carolina.(24) Between 44% and 63% of these adverse events were judged preventable. Examples of triggers include abnormal laboratory results such as a rising creatinine, prescriptions for antidote medications such as naloxone, and other medical record–based hints that tell the chart reviewer an adverse event might have occurred, prompting a more thorough review of the medical record.(23) The IHI adult Global Trigger Tool (27), the best studied of the published trigger tools, consistently demonstrates compelling operating characteristics, including excellent inter- and intra-rater reliability, very good to excellent sensitivity, and excellent specificity when compared with the gold standard of detailed expert chart review.(7,11,18)
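The screening step a trigger tool automates can be sketched in a few lines of code. The trigger definitions, thresholds, and record fields below are illustrative assumptions for this sketch only; the IHI Global Trigger Tool specifies its own trigger list and review rules:

```python
# Minimal sketch of trigger-based screening. Trigger definitions and
# thresholds here are hypothetical, not the IHI tool's actual criteria.

def rising_creatinine(record):
    """Flag a serum creatinine that at least doubled during the stay
    (an assumed threshold for this illustration)."""
    values = record.get("creatinine_mg_dl", [])
    return len(values) >= 2 and max(values) >= 2 * values[0]

def naloxone_given(record):
    """Flag administration of an opioid antidote."""
    return "naloxone" in record.get("medications", [])

TRIGGERS = {
    "rising creatinine": rising_creatinine,
    "naloxone administered": naloxone_given,
}

def screen(record):
    """Return the names of any triggers that fire. A non-empty result
    prompts a focused chart review; it is not itself a confirmed
    adverse event."""
    return [name for name, check in TRIGGERS.items() if check(record)]

chart = {"creatinine_mg_dl": [0.9, 2.1], "medications": ["naloxone"]}
print(screen(chart))  # both hypothetical triggers fire for this chart
```

The key design point, reflected in `screen`, is that a trigger only narrows the reviewer's attention; confirming and rating an adverse event remains a human chart-review judgment.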
A 2011 study by Classen and colleagues highlighted the relative test characteristics of the various adverse event detection methods.(7) The authors reviewed 795 closed medical records from 3 large academic medical centers and found that the IHI Global Trigger Tool identified 354 of the 393 adverse events (90%) detected by expert chart review, while the AHRQ Patient Safety Indicators (derived from an algorithm applied to administrative data) identified 35 adverse events (9%), and occurrence reports identified only 4 adverse events (1%). Other studies have demonstrated similar findings.(9-10,13,28)
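The detection rates quoted above follow directly from the study's counts, with the 393 adverse events found by expert chart review serving as the reference standard:

```python
# Recompute the detection rates reported by Classen and colleagues (2011),
# using expert chart review (393 adverse events) as the reference standard.
reference_total = 393
detected = {
    "IHI Global Trigger Tool": 354,
    "AHRQ Patient Safety Indicators": 35,
    "Occurrence reports": 4,
}

for method, n in detected.items():
    print(f"{method}: {n}/{reference_total} = {n / reference_total:.0%}")
```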
In summary, rates of harm in U.S. hospitals remain unacceptably high, with little evidence of significant improvement since To Err is Human was published in 1999.(4,7,24-25) One major reason for these persistently high rates has been the lack of an accepted, rigorous, standardized, and practical approach to measuring and tracking adverse events over time. The IHI Global Trigger Tool, along with other, more patient population–specific trigger tools, was developed to provide practical and reliable measurement approaches to track rates of harm over time (7,24-25,27) at the local, regional, and national levels. Although not perfect, trigger tools have better operating characteristics than other measurement approaches and detect significantly more adverse events than occurrence reports, administrative database–derived harm rates, and concurrent or retrospective chart review.(29) Efforts are under way to automate the IHI adult Global Trigger Tool and to construct and automate a pediatric global trigger tool. Once these two automated global trigger tools are validated, it seems likely that the Centers for Medicare & Medicaid Services (CMS) will require hospitals to report "all cause" harm rates and perhaps report such results publicly or tie them to reimbursement. Other public and private insurers are sure to follow. These will be important next steps in moving U.S. hospitals toward the real work at hand—reliably improving the safety of patients in our health care system.
Paul J. Sharek, MD, MPH
Associate Professor of Pediatrics, Stanford University School of Medicine
Medical Director, Center for Quality and Clinical Effectiveness
Chief Clinical Patient Safety Officer, Lucile Packard Children's Hospital
1. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine, National Academies Press; 2000. ISBN: 9780309068376.
7. Classen DC, Resar R, Griffin F, et al. 'Global Trigger Tool' shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30:581-589.
9. Sharek PJ, Horbar JD, Mason W, et al. Adverse events in the neonatal intensive care unit: development, testing, and findings of an NICU-focused trigger tool to identify harm in North American NICUs. Pediatrics. 2006;118:1332-1340.
10. Takata GS, Mason W, Taketomo C, Logsdon T, Sharek PJ. Development, testing, and findings of a pediatric-focused trigger tool to identify medication-related harm in US children's hospitals. Pediatrics. 2008;121:e927-e935.
20. West AN, Weeks WB, Bagian JP. Rare adverse medical events in VA inpatient care: reliability limits to using Patient Safety Indicators as performance measures. Health Serv Res. 2008;43(1 Pt 1):249-266.
22. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-376.
25. Levinson DR. Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries. Washington, DC: US Department of Health and Human Services, Office of the Inspector General; November 2010. Report No. OEI-06-09-00090.
26. Classen DC, Pestotnik SL, Evans RS, Lloyd JF, Burke JP. Adverse drug events in hospitalized patients. Excess length of stay, extra costs, and attributable mortality. JAMA. 1997;277:301-306.
27. Griffin FA, Resar RK. IHI Global Trigger Tool for Measuring Adverse Events: IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement; 2007.
Table. Comparison of four most frequently used methods to identify harm.
| Harm Detection Method | Advantages | Limitations |
| --- | --- | --- |
| Incident (occurrence) reports | Well-established process in most hospitals | Identifies only between 2% and 8% of harmful events |
| Administrative database algorithms | Standard definitions | Identifies less than 10% of all harms (7) |
| Retrospective/concurrent chart review (from Harvard Medical Practice Study) (22) | Active surveillance can identify harms not well articulated in chart (if honest communication occurs); measures "all cause" harm | Substantially underreported harm rates (3,13); requires training |