
Dropping the Ball Despite an Integrated EMR

Ben-Tzion Karsh, PhD | March 1, 2011

The Case

A patient followed at a community-based clinic that is part of a large health care system with an integrated inpatient/outpatient electronic medical record (EMR) presented to the emergency department (ED) with a fractured humerus. The patient was seen by a physician assistant (PA), treated with a sling and analgesics, and referred to an orthopedics consultant. The referral was made through the system's EMR referral module.

The appointment had to be rescheduled twice due to transportation problems. The health care system caring for the patient has a quality standard that a patient must be seen in follow-up within 30 days of a referral. The hospital automatically cancels appointments when patients miss them to minimize black marks on these 30-day reports and asks the original ordering provider to enter a new consult request to restart the 30-day clock.

A secretary at the hospital canceled the appointment, as per protocol, and assumed that the ordering physician would receive an automatic notice of the cancellation, which would then prompt that provider to enter another referral request. However, because the ordering provider was the PA from the ED, not the primary care physician (PCP), the PCP did not receive notification that the appointment was canceled. Presumably, the PA and ED physician, who did receive the notice, thought the PCP had also received it and was taking responsibility for the patient's follow-up care, and so they took no action.

Luckily, the error was recognized when the patient finally saw his PCP. At that point, an orthopedics referral was made. While the error led to a delay in follow-up, the patient had no further complications.

The Commentary

This case provides interesting, though common, examples of how seemingly benign decisions or designs can, in fact, be unsafe. Although the case refers to the incident as an error, it is not clear that the events in the case meet the definition of an error. The secretary who canceled the appointment assumed the ordering physician would receive notice of the cancellation. That assumption was correct. The secretary did not have the information to know that the PCP was not the ordering physician. In other words, while an undesirable event happened, it was not a classic error. Instead, it was the poor design of the appointment system that increased the risk of this bad outcome.

In this case, the proximal hazard was a software design that did not provide the users with a way to know who was receiving important information. However, there were distal hazards, or more latent conditions, too.(1,2) This particular health system designed its appointment/referral system to require that patients be seen in follow-up within 30 days of referral. This requirement undoubtedly flowed from decisions made by leaders and administrators at fairly high levels of the system and clinic. Their decision, though, had direct consequences for the behaviors and decisions of people at lower levels of the clinic, such as the secretary and the patient. The irony is that the well-intended rule led the clinic's frontline workers to design a workaround—the automatic canceling of appointments.

This particular type of workaround emerges when the system does not work as needed, and so the user "plays games" with the system to defeat it.(3) This is a serious problem for at least three reasons. First, it wastes the time and cognitive effort of users, who have to defeat a dysfunctional system. Second, playing games with a system creates the potential for increased risk by placing users outside their normal operating zone with the system. Third, if users are successful in overcoming the system, this may hide the fact that the system is flawed in the first place. This last point may be the most dangerous one of all—if clever frontline workers succeed in overcoming poor systems to get their work done, organizational leaders (and those looking in from the outside) may perceive that everything is working well, leading to misplaced complacency.

The design of the appointment system also seems to violate a fundamental principle of usability (4)—design for visibility and transparency.(5,6) The secretary assumed the PCP got the cancellation notice; the PA and ED physician made the same assumption. We could ask several questions about their assumptions and actions: Why did they not take responsibility to confirm? Did they not have a professional obligation to ensure the PCP was notified? Although the answer may be yes, one can easily envision competent, caring health care professionals reasonably believing that the PCP was taking responsibility. Because of this, the better questions to ask are: Why did anyone have to assume anything in the first place? Why did the system not provide a clear visual indication of who was notified? Performance improvement of any kind requires information about current performance and expected performance so that deviations can be minimized. If neither the secretary nor the PA was given information indicating that the PCP was not notified, then they lacked the data needed to correct the problem. A simple confirmation to the secretary upon cancellation, indicating who was being notified, might have solved the problem. If the PA and ED physician could have seen in their own notifications who else was notified, they too would have had a chance to catch and correct the problem. But alas, they were not.
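To make the principle concrete, below is a minimal, hypothetical sketch in Python of what a more transparent cancellation step could look like. The names (Referral, cancel_appointment, the providers) are invented for illustration and are not drawn from the EMR in the case; the point is simply that the person canceling an appointment is shown exactly who will, and will not, receive the notice.

    # Hypothetical sketch: make notification routing visible at the moment of cancellation.
    # All names and structures here are illustrative, not taken from the EMR in the case.
    from dataclasses import dataclass, field

    @dataclass
    class Referral:
        patient: str
        ordering_provider: str       # e.g., the PA in the ED who entered the consult
        primary_care_provider: str   # the PCP following the patient

    @dataclass
    class CancellationResult:
        notified: list = field(default_factory=list)

    def cancel_appointment(referral: Referral) -> CancellationResult:
        """Cancel the appointment and show the user exactly who will be notified."""
        recipients = [referral.ordering_provider]
        print(f"Appointment canceled for {referral.patient}.")
        print("Cancellation notices will be sent to: " + ", ".join(recipients))
        # Surface the gap instead of leaving the user to assume.
        if referral.primary_care_provider not in recipients:
            print(f"Note: {referral.primary_care_provider} (PCP) will NOT be notified.")
        return CancellationResult(notified=recipients)

    referral = Referral(patient="J. Doe",
                        ordering_provider="PA, Emergency Department",
                        primary_care_provider="Dr. Smith")
    cancel_appointment(referral)

The design choice is the explicit recipient list echoed back to the user; the same idea applies whether the interface is a terminal, a dialog box, or a notification banner.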

This problem of lack of visibility is common in health information technology (IT). During my own observations of clinicians providing care, I have witnessed many occasions when a clinician pressed "send" or "enter" or "return" only to receive no indication of what the technology was actually doing next. I have also witnessed physicians documenting a visit using dozens of electronic forms, only to repeatedly go back to the first form and scroll through them all. Why? Because there was no indication in the design of the electronic health record as to which forms they had completed. After every few forms they would need to scroll back through to see what they had completed and what was still left to do. Such systems (unintentionally) compromise patient safety, by design.
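As another hypothetical illustration of the same visibility principle, a documentation screen could track and display which forms are complete so the clinician never has to scroll back to check. The form names below are invented; this is a sketch of the idea, not the EHR described above.

    # Hypothetical sketch: show completion status instead of forcing the user to scroll back.
    forms = ["History", "Exam", "Assessment", "Plan", "Patient instructions"]
    completed = {"History", "Exam"}

    for name in forms:
        status = "done" if name in completed else "TO DO"
        print(f"[{status:>5}] {name}")

    remaining = [f for f in forms if f not in completed]
    print(f"{len(remaining)} of {len(forms)} forms remaining: {', '.join(remaining)}")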

But, one could ask, how can I say it compromises patient safety? What evidence is there that patients were harmed? I would argue that no such evidence is needed and that patient harm is only one type of evidence that indicates safety problems. The real questions are: Does the design increase the risk of a safety event? Is the design hazardous?(7,8) In this case, the lack of feedback was risky, and the evidence is that the problem was not caught until the patient finally saw his PCP. A lack of harm does not by itself indicate the presence of safety. It might, but it might not. Is a person who regularly drives drunk, but does not crash, safe? Is a health IT system that obscures relevant data, but does not appear to have harmed a patient (yet), safe? As I have already suggested, low accident rates or patient safety event rates may mislead or lull some into thinking that their organization is safe. That may be true, or it may be that staff are putting forth extra effort to compensate for poor systems. Or, as in this case, it may simply be that the patient catches the problem before harm occurs. Or it may be luck, which at some point is likely to run out.

The clinic in the case could have averted this situation. When organizations create rules or measures, they need to conduct risk assessments to discover what unintended consequences could emerge. Although there are quantitative methods to conduct risk assessments, formal qualitative methods can often do the trick. In this case, a "what if?" analysis would have sufficed. The risk assessment team would ask, "What if we implemented a measure of 30-day referral compliance?" and then develop a number of "what if?" scenarios. What would likely happen is the team would realize that patient-driven cancellations happen all of the time, which might make the measure artificial and not meaningful under these circumstances. (In this case, perhaps a patient-driven cancellation would exclude this particular patient from the denominator of the measure.)

Alternatively, the clinic could have framed what it did with the data differently. In the case, noncompliance was described as a "black mark," implying the data were used to reprimand. If the organization were more enlightened, leaders might have seen evidence of low 30-day follow-up rates as an opportunity to identify areas for improvement. In such a learning organization, the workaround would have been less likely to develop because clinics would not have feared being reprimanded.

Then there is the matter of the software. Usability testing with typical cases probably would have missed this design flaw, as this was not a typical case. A cognitive systems engineering (CSE) (9,10) approach would have been more likely to catch this flaw. CSE methods involve determining what the system (not the computer, but the appointment scheduling system, which includes the secretary, organizational goals, and software [11]) needs to do, and then designing new systems to achieve those goals, or analyzing existing or potential systems against that standard. The system in the case needed to know, among many other things, who was being sent notices and that notices were received. The system did not provide either piece of information to users. This was a case—unfortunately not a rare one—in which organization-level decisions and software design interacted to create a patient care hazard.
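A rough sketch of what meeting those two requirements might look like in software is shown below, with hypothetical names and structure: each notice records its recipients and whether each one has acknowledged it, so unacknowledged, safety-critical notices can be escalated rather than silently assumed to have been read.

    # Hypothetical sketch: track who was sent a notice and whether it was received,
    # and escalate anything safety-critical that has not been acknowledged.
    from dataclasses import dataclass, field

    @dataclass
    class Notice:
        subject: str
        recipients: list
        acknowledged: set = field(default_factory=set)

        def acknowledge(self, recipient: str) -> None:
            if recipient in self.recipients:
                self.acknowledged.add(recipient)

        def unacknowledged(self) -> list:
            return [r for r in self.recipients if r not in self.acknowledged]

    notice = Notice(subject="Orthopedics referral appointment canceled",
                    recipients=["PA, Emergency Department", "Dr. Smith (PCP)"])
    notice.acknowledge("PA, Emergency Department")

    for recipient in notice.unacknowledged():
        # In a real system this might page a care coordinator or reopen the referral.
        print(f"ESCALATE: {recipient} has not acknowledged '{notice.subject}'")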

Take-Home Points

  • Organizational decisions can have profound impacts on how work is carried out by frontline staff.(12)
  • Workarounds are usually symptoms of underlying system design flaws.(13,14)
  • A lack of patient harm does not indicate the presence of safety. Safety efforts should focus on reducing risks to patient safety, whether or not they have produced harm.
  • Software needs to be designed to support the performance needs of the user-software-organization system. Those needs should be identified before purchase so that organizations can work with vendors to ensure products meet the needs.

Ben-Tzion Karsh, PhD
Associate Professor
Industrial and Systems Engineering
University of Wisconsin

 

References

1. Reason JT. Human Error. New York, NY: Cambridge University Press; 1990. ISBN: 9780521314190.

2. Reason JT. Managing the Risks of Organizational Accidents. Aldershot, Hampshire, England: Ashgate; 1997. ISBN: 9781840141054.

3. Holden RJ, Alper SJ, Scanlon MC, Murkowski K, Rivera AJ, Karsh B. Challenges and problem-solving strategies during medication management: a study of a pediatric hospital before and after bar-coding. Proceedings of the 2nd International Conference on Healthcare Systems Ergonomics and Patient Safety; 2008; Strasbourg, France.

4. Nielsen J. Usability Engineering. Boston, MA: Academic Press; 1993. ISBN: 9780125184069.

5. Klein G, Woods DD, Bradshaw JM, Hoffman RR, Feltovich PJ. Ten challenges for making automation a "team player" in joint human-agent activity. IEEE Intell Syst. 2004;19:91-95.

6. Norman DA. The Design of Everyday Things. New York, NY: Doubleday; 1988. ISBN: 0385267746.

7. Karsh BT, Holden RJ, Alper SJ, Or CK. A human factors engineering paradigm for patient safety: designing to support the performance of the healthcare professional. Qual Saf Health Care. 2006;15(suppl 1):i59-i65.

8. Scanlon M, Karsh B, Saran KA. Risk-based patient safety metrics. In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches. Rockville, MD: Agency for Healthcare Research and Quality; 2008.

9. Hollnagel E, Woods DD. Cognitive systems engineering: new wine in new bottles. Int J Hum Comput Stud. 1999;51:339-356.

10. Hollnagel E, Woods DD. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. New York, NY: CRC Press; 2005. ISBN: 9780849328213.

11. Carayon P, Schoofs Hundt A, Karsh BT, et al. Work system design for patient safety: the SEIPS model. Qual Saf Health Care. 2006;15(suppl 1):i50-i58.

12. Karsh BT, Brown R. Macroergonomics and patient safety: the impact of levels on theory, measurement, analysis, and intervention in medical error research: the case of patient safety health information technology. Appl Ergon. 2010;41:674-681.

13. Alper SJ, Karsh BT. A systematic review of the causes of safety violations in industry. Accid Anal Prev. 2009;41:739-754.

14. Koppel R, Wetterneck T, Telles JL, Karsh BT. Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc. 2008;15:408-423.

 

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.