Annual Perspective

Technology as a Tool for Improving Patient Safety

A Jay Holmgren, Susan McBride, Bryan Gale, Sarah Mossburg | March 29, 2023

In the past several decades, technological advances have opened new possibilities for improving patient safety. Digitizing healthcare processes has the potential to standardize clinical workflows, increase their efficiency, and reduce errors and costs across all healthcare settings.1 However, if technological approaches are designed or implemented poorly, they can increase the burden on clinicians. For example, overburdened clinicians can experience alert fatigue and fail to respond to notifications, which can lead to more medical errors. As a testament to the significance of this topic in recent years, several government agencies, including the Agency for Healthcare Research and Quality (AHRQ) and the Centers for Medicare & Medicaid Services (CMS), have developed resources to help healthcare organizations integrate technology, such as the Safety Assurance Factors for EHR Resilience (SAFER) guides developed by the Office of the National Coordinator for Health Information Technology (ONC).2,3,4 However, there is some evidence that these resources have not been widely used.5 Recently, CMS began requiring hospitals to use the SAFER guides as part of the FY 2022 Hospital Inpatient Prospective Payment System (IPPS), which should raise awareness and uptake of the guides.6

During 2022, research into technological approaches was a major theme of articles on PSNet. Researchers reviewed all relevant articles on PSNet and consulted with A Jay Holmgren, PhD, and Susan McBride, PhD, subject matter experts in health IT and its role in patient safety. Key topics and themes are highlighted below.

Clinical Decision Support  

The most prominent focus in the 2022 research on technology, based on the number of articles published on PSNet, was clinical decision support (CDS) tools. CDS provides clinicians, patients, and other individuals with relevant data (e.g., patient-specific information), purposefully filtered and delivered through a variety of formats and channels, to improve and enhance care.7

Computerized Patient Order Entry  

One of the main applications of CDS is in computerized patient order entry (CPOE), which is the process used by clinicians to enter and send treatment instructions via a computer application.8 While the change from paper to electronic order entry itself can reduce errors (e.g., due to unclear handwriting or manual copy errors), research in 2022 showed that there is room for improvement in order entry systems, as well as some promising novel approaches. 

Two studies looked at the frequency of and reasons for medication errors in the absence of CDS and CPOE and demonstrated a clear patient safety need. One study found that most medication errors occurred during the ordering or prescribing stage, and both this study and the other study found that the most common medication error was incorrect dose. Ongoing research, such as the AHRQ Medication Safety Measure Development project, aims to develop and validate measure specifications for wrong-patient, wrong-dose, wrong-medication, wrong-route, and wrong-frequency medication orders within EHR systems, in order to better understand and capture health IT safety events.9 Errors of this type could be avoided, or at least reduced, through the use of effective CPOE and CDS systems.

However, even when CPOE and CDS are in place, errors can still occur and can even be caused by the systems themselves. One study reviewed duplicate medication orders and found that 20% of duplicate orders resulted from technological issues, including alerts being overridden, alerts not firing, and automation issues (e.g., prefilled fields). A case study last year illustrated one of the technological issues, in this case a manual keystroke error, that can lead to a safety event. A pharmacist mistakenly set the start date for a medication to the following year rather than the following day, which the CPOE system failed to flag. The authors recommended various alerts and coding changes in the system to prevent this particular error in the future.
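The case study's keystroke error suggests a simple guardrail: a plausibility check on entered start dates. The sketch below is a hypothetical illustration of such a check, not the logic of any actual CPOE product; the 90-day threshold and function names are invented for the example.

```python
from datetime import date

# Hypothetical plausibility check a CPOE system might apply to an entered
# start date; the threshold is illustrative, not drawn from the case study.
MAX_DAYS_AHEAD = 90  # flag start dates unusually far in the future

def start_date_warnings(entered: date, today: date) -> list[str]:
    """Return human-readable warnings for an implausible start date."""
    warnings = []
    if entered < today:
        warnings.append("Start date is in the past.")
    elif (entered - today).days > MAX_DAYS_AHEAD:
        warnings.append(
            f"Start date is {(entered - today).days} days away; "
            "check for a year-entry error."
        )
    return warnings

# The error in the case study: 'following year' typed instead of 'following day'.
print(start_date_warnings(date(2024, 1, 2), today=date(2023, 1, 1)))
```

A check like this would have flagged the year-for-day substitution at order entry, while leaving ordinary next-day orders unprompted and avoiding added alert burden.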

There were also studies in 2022 that showed successful outcomes of well-implemented CPOE systems. One in-depth pre-post, mixed-methods study showed that a fully implemented CPOE system significantly reduced specific serious and commonly occurring prescribing and procedural errors. The authors also presented evidence that the system was cost-effective and detailed implementation lessons learned drawn from the qualitative data collected for the study. A specific CPOE function that demonstrated statistically significant improvement in 2022 was automatic deprescribing of medication orders and communication of the relevant information to pharmacies. (Deprescribing is the planned and supervised process of reducing the dose of, or stopping, a medication that is no longer beneficial or could be causing harm.) The study showed an immediate and sustained 78% increase in successful discontinuations after implementation of the software. A second study on the same functionality determined that currently only one third to one half of medications are e-prescribed, and it proposed that e-prescribing be expanded to increase the impact of the deprescribing software. It should be noted, however, that the systems were not perfect and that a small percentage of medications were unintentionally cancelled. Finally, an algorithm to detect patients in need of follow-up after test results was developed and implemented in another study. The algorithm showed some process improvements, but outcome measures were not reported.


Usability of CDS systems was a large focus of research in 2022. Poorly designed systems that do not fit into existing workflows lead to frustrated users and increase the potential for errors. For example, if users are required to enter data in multiple places or prompted to enter data that are not available to them, they could find ways to work around the system or even cease to use it, increasing the potential for patient safety errors. The documentation burden is already very high on U.S. clinicians,10 so it is important that novel technological approaches do not add to this burden but, if possible, alleviate it by offering a high level of usability and interoperability.  

One study used human-factored design in creating a CDS to diagnose pulmonary embolism in the Emergency Department and then surveyed clinician users about their experiences using the tool. Despite respondents giving the tool high usability ratings and reporting that the CDS was valuable, actual use of the tool was low. Based on the feedback from users, the authors proposed some changes to increase uptake, but both users and authors mentioned the challenges that arise when trying to change the existing workflow of clinicians without increasing their burden. Another study gathered qualitative feedback from clinicians on a theoretical CDS system for diagnosing neurological issues in the Emergency Department. In this study too, many clinicians saw the potential value in the CDS tool but had concerns about workflow integration and whether it would impact their ability to make clinical decisions. Finally, one study developed a dashboard to display various risk factors for multiple hospital-acquired infections and gathered feedback from users. The users generally found the dashboard useful and easy to learn, and they also provided valuable feedback on color scales, location, and types of data displayed. All of these studies show that attention to end user needs and preferences is necessary for successful implementation of CDS. However, the recent market consolidation among electronic health record (EHR) vendors may affect how much user feedback is gathered and integrated into CDS systems. Larger vendors may have more resources to devote to improving the usability and design of CDS, or their near monopolies in the market may remove the incentive to innovate further.11 More research is needed as this trend continues.

Alerts and Alarms 

Alerts and alarms are an important part of most CDS systems, as they can prompt clinicians with important and timely information during the treatment process. However, these alerts and alarms must be accurate and useful to elicit an appropriate response. The tradeoff between increased safety due to alerts and clinician alert fatigue is an important balance to strike.12

Many studies in 2022 looked at clinician responses to medication-related alerts, including override and modification rates. Several of the studies found a high alert override rate but questioned the validity of using override rates alone as a marker of CDS effectiveness and usability. For example, one study looked at drug allergy alerts and found that although 44.8% of alerts were overridden, only 9.3% of those were inappropriately overridden, and very few overrides led to an adverse allergic reaction. A study on “do not give” alerts found that clinicians modified their orders to comply with alert recommendations after 78% of alerts but only cancelled orders after 26% of alerts. A scoping review looked at drug-drug interaction alerts and found similar results, including high override rates and the need for more data on why alerts are overridden. These findings are supported by another study that found that the underlying drug value sets triggering drug-drug interaction alerts are often inconsistent, leading to many inappropriate alerts that are then appropriately overridden by clinicians. These studies suggest that while a certain number of overrides should be expected, the underlying criteria for alert systems should be designed and regularly reviewed with specificity and sensitivity in mind. This will increase the frequency of appropriate alerts that foster indicated clinical action and reduce alert fatigue. 

There also seems to be variability in the effectiveness of alert systems across sites. One study looked at an alert to add an item to the problem list if a clinician placed an order for a medication that was not indicated based on the patient’s chart. The study found about 90% accuracy in alerts across two sites but a wide difference in the frequency of appropriate action between the sites (83% and 47%). This suggests that contextual factors at each site, such as culture and organizational processes, may impact success as much as the technology itself.  

A different study looked at the psychology of dismissing alerts using log data and found that dismissing alerts becomes habitual and that the habit is self-reinforcing over time. Furthermore, nearly three quarters of alerts were dismissed within 3 seconds. This indicates how challenging it can be to change or disrupt alert habits once they are formed. 
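The habitual-dismissal finding came from analyzing time-to-dismissal in alert logs. The sketch below shows, under assumed field names and invented sample data (not the cited study's), how a rapid-dismissal rate could be computed from such logs.

```python
# Hypothetical alert-log records: (clinician_id, seconds_to_dismiss).
# Field layout and values are illustrative, not from the cited study.
log = [
    ("c1", 1.2), ("c1", 0.8), ("c1", 2.5),
    ("c2", 14.0), ("c2", 2.1), ("c3", 45.0),
]

THRESHOLD_S = 3.0  # dismissals this fast suggest habitual overriding

def rapid_dismissal_rate(records):
    """Fraction of alerts dismissed within THRESHOLD_S seconds."""
    rapid = sum(1 for _, secs in records if secs < THRESHOLD_S)
    return rapid / len(records)

print(f"{rapid_dismissal_rate(log):.0%} dismissed within 3 s")  # → 67% dismissed within 3 s
```

Tracking a metric like this per clinician over time is one way a health system could surface habit formation before it becomes entrenched.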

Artificial Intelligence and Machine Learning  

In recent years, one of the largest areas of burgeoning technology in healthcare has been artificial intelligence (AI) and machine learning. AI and machine learning use algorithms to absorb large amounts of historical and real-time data and then predict outcomes and recommend treatment options as new data are entered by clinicians. Research in 2022 showed that these techniques are starting to be integrated into EHR and CDS systems, but challenges remain. A full discussion of this topic is beyond the scope of this review. Here we limit the discussion to several patient-safety-focused resources posted on PSNet in 2022.  

One of the promising aspects of AI is its ability to improve CDS processes and clinician workflow overall. For example, one study last year looked at using machine learning to improve and filter CDS alerts. The authors found that the software could reduce alert volume by 54% while maintaining high precision. Reducing alert volume has the potential to alleviate alert fatigue and habitual overriding. Another topic explored in a scoping review was the use of AI to reduce adverse drug events. While only a few studies reviewed implementation in a clinical setting (most evaluated algorithm technical performance), several promising uses were found for AI systems that predict risk of an adverse drug event, which would facilitate early detection and mitigate negative effects.
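The core idea behind ML-based alert filtering is to score each alert's likelihood of being clinically actionable and suppress low-scoring ones. The sketch below illustrates that idea only; the features, weights, and threshold are invented for the example, whereas a real system (like the one in the cited study) would learn them from historical clinician responses.

```python
# Toy linear scoring model for alert actionability. All weights are
# invented for illustration; a deployed model would be trained on
# logged clinician responses to past alerts.
WEIGHTS = {"severe_interaction": 2.0, "prior_override": -1.5, "renal_risk": 1.0}
BIAS = -0.5

def actionability_score(alert: dict) -> float:
    """Sum the weights of the features present on this alert."""
    return BIAS + sum(w for feat, w in WEIGHTS.items() if alert.get(feat))

def filter_alerts(alerts, threshold=0.0):
    """Keep alerts scoring above threshold; return (kept, n_suppressed)."""
    kept = [a for a in alerts if actionability_score(a) > threshold]
    return kept, len(alerts) - len(kept)

alerts = [
    {"severe_interaction": True},                          # score 1.5
    {"prior_override": True},                              # score -2.0
    {"severe_interaction": True, "prior_override": True},  # score 0.0
]
kept, suppressed = filter_alerts(alerts)
print(len(kept), suppressed)  # → 1 2
```

The safety-critical design choice is the threshold: it trades alert volume against the risk of suppressing a truly actionable alert, which is why the cited study emphasized maintaining high precision while cutting volume.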

Despite enthusiasm for and promising applications of AI, implementation is slow. One of the challenges facing implementation is the variable quality of the systems. For example, a commonly used sepsis detection model was recently found to have very low sensitivity.13 Algorithms also drift over time as new data are integrated, and this can affect performance, particularly during and after large disturbances like the COVID-19 pandemic.14 There is also emerging research about the impact of AI algorithms on racial and ethnic biases in healthcare; at the time of publication of this essay, an AHRQ EPC was conducting a review of evidence on the topic.15 These examples highlight the fact that AI is not a “set it and forget it” application; it requires monitoring and customization from a dedicated resource to ensure that the algorithms perform well over time. A related challenge is the lack of a strong business case for using high-quality AI. Because of this, many health systems choose to use out-of-the-box AI algorithms, which may be of poor quality overall (or are unsuited to particular settings) and may also be “black box” algorithms (i.e., not customizable by the health system because the vendor will not allow access to the underlying code).16 The variable quality and the lack of transparency may cause mistrust by clinicians and overall aversion to AI interventions.  

In an attempt to address these concerns, one article in 2022 detailed best practices for AI implementation in health systems, focusing on the business case. These include using AI to address a priority problem for the health system rather than treating it as an end in itself; testing the AI on the health system's own patients and data to demonstrate applicability and accuracy for that setting; confirming that the AI can provide a return on investment; and ensuring that the AI can be implemented easily and efficiently. Another white paper described a human-factors and ergonomics framework for developing AI in order to improve implementation within healthcare systems, teams, and workflows. The federal government and international organizations have also published AI guidelines, focusing on increasing trustworthiness (National Artificial Intelligence Initiative)17 and ensuring ethical governance (World Health Organization).18

Conclusion and Next Steps 

As highlighted in this review, the scope and complexity of technology and its application in healthcare can be intimidating for healthcare systems to approach and implement. Researchers last year thus created a framework that health systems can use to assess their digital maturity and guide their plans for further integration.  

The field would benefit from more research in several areas in upcoming years. First and foremost, high-quality prospective outcome studies are needed to validate the effectiveness of the new technologies. Second, more work is needed on system usability, how the systems are integrated into workflows, and how they affect the documentation burden placed on clinicians. For CDS specifically, more focus is needed on patient-centered CDS (PC CDS), which supports patient-centered care by helping clinicians and patients make the best decisions given each individual’s circumstances and preferences.19 AHRQ is already leading efforts in this field with their CDS Innovation Collaborative project.20 Finally, as it becomes more common to incorporate EHR scribes to ease the documentation burden, research on their impact on patient safety will be needed, especially in relation to new technological approaches. For example, when a scribe encounters a CDS alert, do they alert the clinician in all cases? 

In addition to the approaches mentioned in this article, other emerging technologies in early stages of development hold theoretical promise for improving patient safety. One prominent example is “computer vision,” which uses cameras and AI to gather and process data on what physically happens in healthcare settings beyond what is captured in EHR data,21 including being able to detect immediately that a patient fell in their room.22 

As technology continues to expand and improve, researchers, clinicians, and health systems must be mindful of potential stumbling blocks that could impede progress and threaten patient safety. However, technology presents a wide array of opportunities to make healthcare more integrated, efficient, and safe.  

  1. Cohen CC, Powell K, Dick AW, et al. The Association Between Nursing Home Information Technology Maturity and Urinary Tract Infection Among Long-Term Residents. J Appl Gerontol. 2022;41(7):1695-1701. doi: 10.1177/07334648221082024.
  5. McBride S, Makar E, Ross A, et al. Determining awareness of the SAFER guides among nurse informaticists. J Inform Nurs. 2021;6(4).
  6. Sittig DF, Sengstack P, Singh H. Guidelines for US hospitals and clinicians on assessment of electronic health record safety using SAFER guides. JAMA. 2022;327:719-720.
  10. Holmgren AJ, Downing NL, Bates DW, et al. Assessment of electronic health record use between US and non-US health systems. JAMA Intern Med. 2021;181:251-259.
  11. Holmgren AJ, Apathy NC. Trends in US hospital electronic health record vendor market concentration, 2012–2021. J Gen Intern Med. 2022.
  12. Co Z, Holmgren AJ, Classen DC, et al. The tradeoffs between safety and alert fatigue: data from a national evaluation of hospital medication-related clinical decision support. J Am Med Inform Assoc. 2020;27:1252-1258.
  13. Wong A, Otles E, Donnelly JP, et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med. 2021;181:1065-1070.
  14. Parikh RB, Zhang Y, Kolla L, et al. Performance drift in a mortality prediction algorithm among patients with cancer during the SARS-CoV-2 pandemic. J Am Med Inform Assoc. 2022;30:348-354.
  18. Ethics and governance of artificial intelligence for health (WHO guidance). Geneva: World Health Organization; 2021.
  19. Dullabh P, Sandberg SF, Heaney-Huls K, et al. Challenges and opportunities for advancing patient-centered clinical decision support: findings from a horizon scan. J Am Med Inform Assoc. 2022: 29(7):1233-1243. doi: 10.1093/jamia/ocac059. PMID: 35534996; PMCID: PMC9196686.
  21. Yeung S, Downing NL, Fei-Fei L, et al. Bedside computer vision: moving artificial intelligence from driver assistance to patient safety. N Engl J Med. 2018;378:1271-1273.
  22. Espinosa R, Ponce H, Gutiérrez S, et al. A vision-based approach for fall detection using multiple cameras and convolutional neural networks: a case study using the UP-Fall detection dataset. Comput Biol Med. 2019;115:103520.
This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.