
Patient Safety: A Perspective from Office Practice

Richard J. Baron, MD | May 1, 2009 


Most patient interactions with the health care system occur in the outpatient setting. Many potential and actual safety problems occur there as well.(1) Yet patient safety literature and practice do not seem to have reached deeply into ambulatory care. This is likely due to a combination of factors: in most practices, there is no layer of administration providing a second look at routine policies and procedures; there is no accrediting agency, like The Joint Commission, to mandate safe practices (2); and those of us in office practice are so consumed with simply getting through the day that it is difficult to recognize the problems, large and small, that can lead to major safety hazards. The business case for safety, such as it is, relies almost entirely on the malpractice rate-setting process: errors that result in litigation lead to higher premiums and personal and professional misery. However, as Studdert (3) has argued, relying on the malpractice system to identify and "correct" errors is unlikely to be timely or productive.

Data from a consortium of malpractice carriers (PIAA) indicate that the leading cause of paid malpractice claims in primary care specialties is delay or error in diagnosis, most commonly involving breast and lung cancers or acute myocardial infarction.(4) This might suggest a set of cognitive errors (i.e., doctors have simply made incorrect diagnostic judgments), but a deeper look at the data suggests otherwise. There is some descriptive literature on errors in the office practice of primary care.(5) For example, the Robert Graham Center for Policy Studies in Family Practice and Primary Care asked 50 board-certified family physicians from around the country to look for and report on events "that should not happen in my practice, and I don't want it to happen again." Eighty-three percent of errors were process errors (mishandling or unavailability of important information, including errors handling results of lab work and diagnostic imaging; communication errors; and medication-related errors). In the judgment of the reporting physicians, only 13% of errors were knowledge errors.(6) What is perhaps most striking when looking at this literature (other studies have similar findings) is how obvious the results are to those of us in practice. I suspect that many in primary care practice would identify having incomplete information, or not placing the information we do have in the proper context, as the root cause of many potential unhappy outcomes—far more so than our lack of clinical knowledge. As with the literature on hospital errors, it may be that "normalization of deviance" (7) is the most dangerous issue we face. By becoming inured to an unacceptable status quo, we miss countless opportunities to improve it.

An old Polish proverb states, "A fool who trips over the same stone twice deserves to break his neck." This perhaps captures something essential about errors in the office: many of them are the result of foreseeable problems that have long been ignored or tolerated in the office environment. It could be argued that physicians' training in learned helplessness begins early, in residency, where we divide "clinical" (interesting, central to our work) and "administrative" (boring, easy, somebody else's problem) activities into two distinct buckets. Because we have so little control over the administrative activities (can anyone in residency improve Hospital Transport?), we become habituated to not accepting responsibility for administrative dysfunction. We bond with our patients, both of us blaming invisible administrators for all the frustrations that we and our patients experience, and we focus our attention on "real medicine." If the PIAA data and the descriptive literature cited above tell us anything, it's that the greatest potential for safety improvement will come not from deeper medical knowledge but from designing and operating more reliable office systems. And, of course, with the majority of U.S. physicians practicing in groups of fewer than four (8), we will need to rely on systems created by practicing physicians in those small groups to achieve dramatic improvement in national outpatient safety.

The biggest challenge to office practitioners is managing a daily barrage of information. It is fair to say that, almost above all else, primary care physicians are engaged in the information tracking and aggregation business. Ensuring appropriate follow-up on received and requested information, especially without technology support, is an ongoing and largely insurmountable challenge. When a physician orders a test or a referral, she is initiating a process that can miscarry at each step of the way: the patient may not follow through to get what has been requested; the results may not arrive back at the office; the receiving physician may not follow up appropriately on the results if they do arrive. Each of these missteps happens with stunning frequency in office practice, and each creates a potential for injury that must be managed differently. Sheer volume is a consistent part of the challenge and needs to be acknowledged as such. It is not news that primary care offices are busy, so the opportunity to prevent errors of this kind lies in designing office systems robust enough to perform well under ever-present stress and volume.

At the ordering phase, the best practice is considered to be a tracking log of all ordered tests and referrals. Most practitioners find this completely infeasible: the volume of follow-up work created is simply unmanageable. Our five-physician practice generates at least 250 nonroutine referrals to specialists per month and, with 1600 women of mammogram age and 3400 patients of colonoscopy age, the burden of tracking and managing specialty and preventive care referrals quickly exceeds the capacity of any log. If one does attempt to keep a log, its maintainer must be integrated into the "received reports" process, a step that is cumbersome and inevitably generates delays of its own. Perhaps the most practical modification of this approach is to apply triage principles: the ordering physician decides, at the time of the order, whether either of two risk factors for consequential failure is present. Either the study is important enough that failure to complete the cycle would likely be a clinically significant event, or a problem in completion can reasonably be anticipated at the time the test is ordered (perhaps the patient has signaled reluctance or has a history of failing to follow through). Such a triage system has the virtue of making the list manageable, even if it has the vice of leaving potentially important things off it.
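The triage rule described above reduces to a simple disjunction: put an order on the tracking log only if missing it would be clinically significant, or if follow-through is already in doubt. The sketch below is illustrative only, not part of the original article; the field names and example orders are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """A test or referral order (hypothetical fields, for illustration)."""
    description: str
    significant_if_missed: bool  # failure of cycle completion would likely harm the patient
    completion_risk: bool        # patient reluctant, or has failed to follow through before

def needs_tracking(order: Order) -> bool:
    """Apply the triage rule: log the order if either risk factor is present."""
    return order.significant_if_missed or order.completion_risk

orders = [
    Order("screening mammogram", significant_if_missed=True, completion_risk=False),
    Order("routine lipid panel", significant_if_missed=False, completion_risk=False),
    Order("GI referral, reluctant patient", significant_if_missed=False, completion_risk=True),
]
tracked = [o.description for o in orders if needs_tracking(o)]
print(tracked)  # the routine lipid panel stays off the log
```

The point of the rule is exactly the trade-off the text names: the log shrinks to a manageable size at the cost of leaving low-risk items untracked.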
Technology offers an imperfect solution here. It is possible to track all orders electronically, but unless tracking numbers are assigned and consistently carried through to the results, reconciliation remains a manual process. And given the multiplicity of "result generators" with whom each primary care physician works (one study of Medicare patients found that each primary care physician had an average peer network of 97 other physicians per 100 Medicare beneficiaries in the practice [9]), we are a long way from realizing a technical solution outside of integrated delivery systems.
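The reconciliation step just described amounts to matching received results against outstanding orders on a shared identifier and surfacing whatever never came back. A minimal sketch, assuming each order and result carries such a tracking id (the function and ids here are hypothetical):

```python
def open_loops(orders: dict[str, str], results_received: set[str]) -> dict[str, str]:
    """Return orders (tracking_id -> description) with no matching result.

    This only works if tracking ids are assigned at ordering time and
    carried through to the result, which is the very precondition the
    text notes is rarely met outside integrated delivery systems.
    """
    return {tid: desc for tid, desc in orders.items() if tid not in results_received}

orders = {"T-101": "chest x-ray", "T-102": "colonoscopy referral", "T-103": "CBC"}
results_received = {"T-101", "T-103"}
print(open_loops(orders, results_received))  # {'T-102': 'colonoscopy referral'}
```

Without consistent identifiers, the set-difference above degenerates into a clerk comparing paper lists, which is why the reconciliation stays manual in most practices.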

In our own office, though we don't have hard data on this, I believe that the full-featured electronic health record we implemented in July 2004 (10), used in a consistent and timely fashion, has had a major impact on our ability to deliver safer care. We are much more likely to know what others in the office have done, even this morning. We reliably view information in a context (what was the creatinine last time?). And we are able to mobilize a team to help us accomplish some of the follow-up functions that make care safer (for example, having a staff member call a patient tomorrow or next week to find out how things are going after a visit or with a new medication). We also do a better job communicating vital safety information (medication and problem lists, allergies, etc.) to others with whom we practice (emergency departments, consultants, hospitals). However, the practice standard doesn't seem to contemplate the availability of the kind of information we can reliably provide, so it is often not sought by colleagues—and is occasionally disregarded—even when present. But the technology has definitely helped us keep up with the large volume of tasks we face daily (prescription refills, communication to patients of lab results, reminders of preventive services, or guideline-based care [11]), and our practice would be impossible without it.

Although systems and technological solutions are vital, perhaps the most important thing that those of us in practice can do to improve safety involves a "frame shift." Every day, we practice from the perspective of our own lives and our own experience. A phenomenologic analysis of medical mistakes argues that a better way to frame mistakes is to think of them as actions that "go wrong" in time.(12) At the time we took the action, we likely believed it was right, or else we wouldn't have taken it. Opening oneself to an imaginative inquiry into how things could go wrong as we do them may be a very powerful safety tool. Some authors have argued that patient centeredness is itself one of the best ways to minimize errors in practice.(5) Though I could not find robust literature addressing this, I do believe that being able to imagine how the care experience will work for the patient could get us out of the unexamined rut that allows us to overlook so many of the deficits in everyday practice. Instead of knowingly explaining why we don't have the information we need or why the call didn't get returned, we would do well to honor patients' frustrations and see in them opportunities to better meet their expectations and to care for them more safely and reliably.

Richard J. Baron, MD
CEO, Greenhouse Internists, PC
Chair, American Board of Internal Medicine



1. Gandhi TK, Weingart SN, Borus J, et al. Adverse drug events in ambulatory care. N Engl J Med. 2003;348:1556-1564. [go to PubMed]

2. Wachter RM. Understanding patient safety. New York, NY: McGraw-Hill Professional; 2007. ISBN: 0071482776.

3. Studdert DM, Mello MM, Brennan TA. Medical malpractice. N Engl J Med. 2004;350:283-292. [go to PubMed]

4. Cumulative Data Sharing Report: January 1, 1987-June 30, 1998. Rockville, MD: Physician Insurers Association of America; 1998.

5. Davis K. Patient Safety Issues in the Office-Based Setting. Mechanicsburg, PA: PMSLIC Insurance Company; 2008.

6. Dovey SM, Meyers DS, Phillips RL Jr, et al. A preliminary taxonomy of medical errors in family practice. Qual Saf Health Care. 2002;11:233-238. [go to PubMed]

7. Vaughan D. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago, IL: University of Chicago Press; 1996. ISBN: 9780226851761.

8. Woodwell DA, Cherry DK. National Ambulatory Medical Care Survey: 2002 summary. Adv Data. 2004;346:15. [go to PubMed]

9. Pham HH, O'Malley AS, Bach PB, Saiontz-Martinez C, Schrag D. Peer networks of primary care physicians and the challenge of care coordination. Abstract presented at: 25th Annual AcademyHealth Research Meeting; June 2008; Washington, DC.

10. Baron RJ, Fabens EL, Schiffman M, Wolf E. Electronic health records: just around the corner? Or over the cliff? Ann Intern Med. 2005;143:222-226. [go to PubMed]

11. Baron RJ. Quality improvement with an electronic health record: achievable, but not automatic. Ann Intern Med. 2007;147:549-552. [go to PubMed]

12. Paget MA. The Unity of Mistakes: A Phenomenological Analysis of Medical Work. Philadelphia, PA: Temple University Press; 1988.



This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.