Assessing the Safety of Electronic Health Records: What Have We Learned?
The percentage of physicians and hospitals using electronic health records (EHRs) is now well over 90%.(1) While this transformation is generally good news, it has been characterized by many unintended and unexpected consequences.(2) Over the past decade or so, we have conducted several studies at the intersection of EHR implementation and safety (3) to help understand and address some of these unintended consequences and their impact on patient safety. We used both quantitative and qualitative methods, including multiple rapid assessment techniques such as interviews and site visits (4), to focus on key EHR-related activities. Our topics have included: (i) exploration of the processes involved in identifying and communicating abnormal laboratory test results (5) and referrals (6); (ii) development and implementation of service-oriented clinical decision support (CDS) (7); (iii) identification of recommended practices for ensuring safe and effective EHR implementation and use (8); and (iv) roles of organizations and people responsible for designing, developing, implementing, and evaluating EHRs, including a large multicenter project on CDS.(9)
We observed that EHR-related safety issues can be generally categorized into five types (Table). The best approach to both understanding and resolving them is sociotechnical, addressing not just technical problems such as hardware and software but also clinical workflow, policies, and procedures.(10) In this perspective, we highlight four key lessons drawn from our work that we believe are useful for clinicians and health care organizations that seek to identify, prevent, and mitigate EHR-related safety issues.
Lesson One: EHR Safety Requires "Requisite Imagination"
Requisite imagination is the "ability to imagine key aspects of the future we are planning" and foresee potential traps.(11) We did not fully anticipate what might go wrong with EHR implementations or invest enough in testing for problems while developing or implementing EHRs. Thus, unforeseen events happened more frequently and were more severe than anyone predicted.(12) Over the years we have seen unexpected safety consequences related to large-scale ransomware attacks (13), lack of contingency planning, routine EHR upgrades, system-to-system interface changes, staff vacations or retirements, organizational leadership changes, CDS failures (14), and hardware failures (15), to name just a few. Failure to adequately prepare for EHR-related safety events is a surefire path to patient harm. Toward that end, we developed the Office of the National Coordinator–approved SAFER (System Assurance Factors for EHR Resilience) Guides: nine proactive EHR risk-assessment guides focusing on key clinical processes (e.g., computer-based order entry with CDS and abnormal test result reporting) as well as technical concerns (e.g., contingency planning for downtime and system configuration).(16)
Lesson Two: EHR Safety Requires More Than Just Error Reporting
Waiting for user reports of errors is insufficient to understand the range of safety problems and identify potential solutions. User reports generally identify the most egregious and obvious events (e.g., large-scale system outages due to human errors or malicious activities). Far more events occur due to user errors or unanticipated actions associated with poorly designed user interfaces; changes in default settings or underlying information resources that alter system functionality unexpectedly after a routine system upgrade; and inadvertent interactions between otherwise correctly operating components of a complex, tightly coupled, distributed computing system.(17-19)
Several strategies can identify these insidious patient safety issues; unfortunately, they are infrequently used. The first involves testing of individual components in isolation (e.g., an alert generated for a 10-fold opioid overdose) followed by end-to-end testing of the complete system (e.g., the same prescription entered by the physician received unaltered in the pharmacy) before final deployment. After implementation, periodic testing using test patients in the live EHR is necessary.(20) The second strategy is monitoring user activity logs (21), system error logs, and system performance to identify anomalous behaviors, such as CDS interventions that stop working due to changes in internal representation of a particular data item.(18) The third involves use of computer-based triggers, or indications stored in the database that an error might have occurred, to increase the yield of manual chart reviews.(22) Prevention of these errors requires concerted effort by system developers and those responsible for system configuration, who can ensure the system is failsafe and is designed and developed to withstand unanticipated user and system actions. All organizations must take steps to implement measurement systems using these three strategies and closely monitor their systems' performance proactively.(8) A recent report by the National Quality Forum that highlighted high-priority measurement concepts related to health IT safety is a good start.(23)
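To make the third strategy concrete, the sketch below shows what a computer-based trigger might look like: a query over stored records that flags charts where an abnormal test result has no documented follow-up action within a set window, so that manual review can focus on likely missed results. This is a minimal illustration only; the record layout, field names, and 30-day threshold are our assumptions for this example, not any real EHR schema or validated trigger logic.

```python
# Hypothetical computer-based trigger: flag abnormal results that have no
# follow-up action (repeat test, referral, or documented contact) recorded
# within a fixed window. All field names and the window are illustrative.
from datetime import date, timedelta

FOLLOW_UP_WINDOW = timedelta(days=30)  # assumed review window

def trigger_missed_followup(results, followups):
    """Return abnormal result records lacking a follow-up action
    for the same patient within FOLLOW_UP_WINDOW of the result date."""
    flagged = []
    for r in results:
        if not r["abnormal"]:
            continue  # trigger only fires on abnormal results
        acted_on = any(
            f["patient_id"] == r["patient_id"]
            and r["date"] <= f["date"] <= r["date"] + FOLLOW_UP_WINDOW
            for f in followups
        )
        if not acted_on:
            flagged.append(r)
    return flagged

# Example: patient 1's abnormal result was followed up; patient 2's was not.
results = [
    {"patient_id": 1, "test": "TSH", "abnormal": True, "date": date(2018, 1, 2)},
    {"patient_id": 2, "test": "TSH", "abnormal": True, "date": date(2018, 1, 3)},
]
followups = [{"patient_id": 1, "date": date(2018, 1, 10)}]

print([r["patient_id"] for r in trigger_missed_followup(results, followups)])
# prints [2]
```

In practice such a trigger would run against the EHR database on a schedule, and the flagged charts (not the trigger alone) would be reviewed by clinicians, since triggers trade some false positives for a much higher yield than random chart review.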
Lesson Three: EHR Safety Requires Leadership, Resources, and Time Investments
Safety in EHR design, development, configuration, testing, and monitoring requires commitment from leadership for several reasons. First, few health care organizations have personnel with the time or requisite skills (e.g., human factors engineers, applied clinical informaticians, patient safety experts) to carry out the work required to identify potential safety issues and then design, build, implement, and test failsafe solutions to potential problems. Leadership needs to provide the human resources required to do this work. Second, many safety-focused solutions have the short-term effect of reducing clinician efficiency since they, by design, restrict unsafe practices (e.g., ability to review multiple patient charts on a single computer). Leadership needs to expect and tolerate losses in efficiency that may result from such limitations. Third, safety improvements often require additional expenditures for purchasing and maintaining high-quality, state-of-the-art equipment, along with appropriate redundancies.
Of course, EHR-related patient safety exists as a subset of an organization's overall safety programs, and it depends on the same cultural and programmatic environment. As such, another key facet of this work is for organizational leadership to create the blame-free culture that will allow anyone within the organization to bring up potential EHR-related safety issues that need to be addressed.
Lesson Four: EHR Safety Requires Shared Responsibility Among EHR Vendors, Government Regulators, Health Care Organizations, and Users
The responsibility for designing, developing, paying for, and carrying out these policies, procedures, and actions must be shared among EHR developers, health care organizations, system users, and government regulators.(24) We have observed that EHR safety can be compromised by several factors, including: (i) poor design, development, and configuration of the EHR, leading to errors in its software; (ii) incorrect or incomplete use of EHR technology by those within the health care organization; and (iii) lack of processes to monitor and improve the EHR.(25) Assuming that responsibility to address specific safety concerns lies only with the EHR developers (who are unlikely to appreciate local conditions, including complex system interactions or practice patterns) or only with the health care organization (which has little control over how the system was designed and built) is not appropriate. The party best positioned to control a specific safety concern is generally the one best able to address it.(26)
In summary, assessing and improving the safety of EHRs is a never-ending task, and it requires foresight, measurement, commitment, resources, and shared responsibility. Over the next decade, we expect to see increasing emphasis on EHR safety assessment as EHRs become increasingly ubiquitous and intertwined with all health care activities. We hope that the lessons above will help us take full advantage of the enormous potential of EHRs to improve care while mitigating some of the potential harms from these powerful technologies.
Dean F. Sittig, PhD Christopher Sarofim Family Professor of Biomedical Informatics and Bioengineering The University of Texas Health Science Center at Houston, School of Biomedical Informatics UT-Memorial Hermann Center for Healthcare Quality & Safety Houston, Texas
Hardeep Singh, MD, MPH Chief, Health Policy, Quality and Informatics Program Center for Innovations in Quality, Effectiveness and Safety Michael E. DeBakey Veterans Affairs Medical Center and Baylor College of Medicine Houston, Texas
1. 2016 Report to Congress on Health IT Progress: Examining the HITECH Era and the Future of Health IT. Office of the National Coordinator for Health Information Technology (ONC) Office of the Secretary. Washington, DC: United States Department of Health and Human Services; November 2016.
3. Sittig DF, Ash JS. Clinical Information Systems: Overcoming Adverse Consequences. Sudbury, MA: Jones & Bartlett; 2009. ISBN: 9780763757649.
4. McMullen CK, Ash JS, Sittig DF, et al. Rapid assessment of clinical information systems in the healthcare setting: an efficient method for time-pressed evaluation. Methods Inf Med. 2011;50:299-307.
7. Wright A, Sittig DF, Ash JS, et al. Lessons learned from implementing service-oriented clinical decision support at four sites: a qualitative study. Int J Med Inform. 2015;84:901-911.
9. Wright A, Ash JS, Erickson JL, et al. A qualitative study of the activities performed by people involved in clinical decision support: recommended practices for success. J Am Med Inform Assoc. 2014;21:464-472.
11. Westrum R, Adamski AJ. Requisite imagination: the fine art of anticipating what might go wrong. In: Hollnagel E, ed. Handbook of Cognitive Task Design. Boca Raton, FL: CRC Press; 2003; 193-220. ISBN: 9780805840032.
18. Singh H, Wilson L, Petersen LA, et al. Improving follow-up of abnormal cancer screens using electronic health records: trust but verify test result communication. BMC Med Inform Decis Mak. 2009;9:49.
19. Schreiber R, Sittig DF, Ash J, Wright A. Orders on file but no labs drawn: investigation of machine and human errors caused by an interface idiosyncrasy. J Am Med Inform Assoc. 2017;24:958-963.
20. Wright A, Aaron S, Sittig DF. Testing electronic health records in the "production" environment: an essential step in the journey to a safe and effective health care system. J Am Med Inform Assoc. 2017;24:188-192.
29. Koppel R, Wetterneck T, Telles JL, Karsh BT. Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc. 2008;15:408-423.
30. Spencer DC, Leininger A, Daniels R, Granko RP, Coeytaux RR. Effect of a computerized prescriber-order-entry system on reported medication errors. Am J Health Syst Pharm. 2005;62:416-419.
31. Perrow C. Normal Accidents: Living with High-Risk Technologies. New York, NY: Basic Books; 1984. ISBN: 978-0691004129.
32. Bobb A, Gleason K, Husch M, Feinglass J, Yarnold PR, Noskin GA. The epidemiology of prescribing errors: the potential impact of computerized prescriber order entry. Arch Intern Med. 2004;164:785-792.
Table. Five Types of EHR Safety Issues That Warrant Assessment (Adapted from .)

| EHR failure mode | Example |
| --- | --- |
| Health IT fails during use or is otherwise not working as designed.(15) | Network problem prevented remote allergy checking from working correctly.(27) |
| Health IT is working as designed, but the design does not meet the user's needs or expectations (i.e., bad design).(17) | A weight-based dosing algorithm coupled with a "mode" error causes clinician to enter order for 38-fold overdose of medication.(28) |
| Health IT is well-designed and working correctly, but was not configured, implemented, or used in a way anticipated or planned for by system designers and developers.(29) | Barcode scanner attached to mobile computer cart does not fit into patient room, forcing registered nurse (RN) to scan medication before entering room. Wrong-patient warning cannot be seen by RN in the room following patient scan.(29) |
| Health IT is working as designed and was configured and used correctly, but interacts with external systems (e.g., via hardware or software interfaces) so that data are lost or incorrectly transmitted or displayed.(30,31) | Alert for monitoring thyroid function in patients receiving amiodarone stopped working when an internal identifier for amiodarone was changed in an external system.(14) |
| Specific health IT safety features or functions were not implemented or not available.(32) | Hospital without an up-to-date, comprehensive backup of their data and system configuration suffers ransomware attack.(33) |