
In Conversation with... Allan Frankel, MD

July 1, 2006 

Dr. Robert Wachter, Editor, AHRQ WebM&M: Tell us a little bit about how you became interested in this kind of work.

Dr. Allan Frankel: It was serendipity. In 1995, I was a full-time private-practice anesthesiologist standing in the recovery room next to the Chief of the Medical Staff. One of my colleagues walked by, pointed to me, and said, "You ought to make Allan chairman of the pharmacy and therapeutics (P&T) committee." Two days later, I was. Although the whole issue around drug safety didn't really exist then, as an anesthesiologist I loved thinking about drugs and medications.

In 1997, Newton-Wellesley Hospital had two highly publicized maternal deaths in the space of a week. The regulators came down with a fierceness that almost put the hospital out of business. Because one of the deaths may have been drug-related, the hospital created a position of medication management officer. Because of my experience as chairman of the P&T Committee, I was asked to do that. At that time, after the Betsy Lehman event but prior to the IOM Report in 1999, I found myself in a position to begin thinking about medication safety. Part of struggling with the issues led me to the Institute for Healthcare Improvement (IHI) where, with a few others from the hospital, I participated in their second Adverse Drug Event Collaborative, chaired by Lucian Leape. Then the process just catapulted. The next year I was asked to co-chair an IHI collaborative. We began making significant changes at Newton-Wellesley, which is a community teaching hospital. By 2000, Partners HealthCare (which includes Brigham and Women's Hospital and Massachusetts General Hospital) created a patient safety position and I was selected to fill it. Now I do it full time.

RW: How much of your interest in safety came from being in anesthesia, the medical field that became interested in safety 10 years before anyone else?

AF: I think anesthesiologists are trained in a slightly different mold than other clinicians. When you think about it, anesthesiologists can replace each other very easily, and that's actually one of the characteristics of reliable systems—like pilots of airplanes—you can take one out and put another one in. Systems thinking also comes naturally to anesthesiologists. But some of my focus on safety came from my own interest in sociology, psychiatry, and the psychology of behavior.

RW: When you started in your safety position, what did you think you'd be able to accomplish and what has most surprised you along the way?

AF: What became readily apparent was how poorly some of the systems were designed and how you could standardize and simplify processes if you could get physician engagement. For me, this began with formulary management—we decided to ask department chiefs to go to their department members and agree upon use of new drugs before we put those drugs on the formulary, which immediately helped promote standardization. That was probably one of the more satisfying aspects of the work, figuring out how to standardize care and bring the evidence more effectively into the process of care at the same time.

RW: What have you learned about physician pushback in terms of standardization, and what tips would you give those who are getting involved in this field as they try to achieve that?

AF: As physicians, we've all been trained to value our independence and our independent thinking. Accordingly, we believe that hospitals are there to allow us to bring our skills to specific areas, but the hospitals are not to intervene in that care, necessarily. Probably the most important lesson from the IHI is to find champions—those people who are interested and invested in trying something different—and to put less time and effort into those folks who are not interested in being innovators or early adopters; you put so much energy into trying to make them change, it's difficult to get them on board. The diffusion of innovation really applies: take physicians who are willing to try something different and new, get them to try it, and if it has value, it will spread through the department without your having to exert a great deal of effort. That's probably the most important lesson I've learned.

RW: Many places are just starting to think about how to structure a safety program, who should be its leader, and what their responsibilities and reporting relationships should be. Can you explain what makes an effective patient safety program and officer?

AF: Over the past decade, we have started to develop a picture of what comprehensive patient safety should look like in an organization. For most organizations, you still need physicians in the position of patient safety officer. Certainly nurses, pharmacists, and other specialty folks can do very well, but physician engagement is still a significant barrier. Many organizations have elevated the safety officer to an executive-level position, and I think that's appropriate. Organizations that place the role lower, as safety directors or even safety managers, need to provide a great deal of support for those individuals, or they're going to be less able to make the needed changes.

These individuals must have a broad array of skills. Being a patient safety officer requires the ability to negotiate and influence, because cultural changes require a great deal of discussion. They must have an understanding of accountability principles and the underlying concepts for a fair and just culture, which includes the disclosure of harm to patients and the human-resource aspects of assessing unsafe acts. A critical area of knowledge is gaining insight into team behaviors and communication skills.

Another area—data management around safety—has been for the most part in the domain of quality personnel, but now is beginning to be important for safety officers. If you think of data in the safety world as reported events—stimulated reporting, spontaneous reporting, or observations of behaviors—underneath that, you have the auditing and surveillance processes. The strong organizations are going to be able to look at all that data and cycle it through operations in their organizations to turn it into real actions that change the environment. They're also going to make sure that feedback occurs, so that these actions get back to the providers who send in the reports. It's a cyclical process of data management, and it takes a lot of leadership skill to do effectively. Underneath that is the methodology for making changes, which includes reliable design, the rapid cycle improvement methods from the IHI, and other methods such as Six Sigma and Toyota Lean. A patient safety officer must have all those skills on top of the clinical skills and knowledge of evidence-based medicine and management to drive specific changes. You also have to give them the wherewithal to make the changes. To me, that requires an executive-level person.

RW: Talk about the interface between the patient safety officer and the quality, risk management, compliance departments, and the Chief Medical Officer.

AF: If you're in a small organization, you're going to wear multiple hats. At a place like Partners, which is huge, those roles get divided among different individuals. In large organizations, the quality people are managing performance on scorecards, on CMS measures, on outcomes-based data. I differentiate that from the data I talked about earlier, which has to do with relationships and environmental concerns. The quality folks can be more rigorous with the data they're looking at; it's easier to analyze, meaning that counting how many central line infections you have is pretty straightforward compared to measuring attitudes about teamwork or safety. But when you get to measuring culture and you're starting to talk about attitudinal surveys, the measurements you're using are, at best, kind of fuzzy.

With regard to risk management and safety, I think risk managers' jobs have morphed in the last few years. Now, their primary job should be to work with the patient safety officer to help improve the environment and systems of care. Once litigation actually occurs, the patient safety responsibility ends, and the risk management process begins; you're dealing with a different process—with malpractice and the tort system, with legal thinking, which has everything to do with blame and negligence. But prior to the legal cases, risk managers review their severe incidents and claims, and patient safety officers review incidents and adverse event reports. Essentially, they're trying to glean similar information: where's the glitch or weakness in the system? Their goals are almost identical, and their working relationship should be pretty close.

I don't have that much interaction with the compliance folks. If you look at the JCAHO's, CMS's, and NQF's safety regulations and goals, they cover a wide array of areas, including information technology, evidence-based practice, nursing practice, and all of the cultural pieces I talked about before. It is not feasible for the patient safety officer to be the conduit for all of those regulations. A smart organization will look at that and say, even though this is a patient safety standard, it has such a quality or IT component that other individuals should shepherd this process forward in the organization, not necessarily the patient safety officer.

Lastly, the Chief Medical Officer deals with issues that a patient safety officer might not—with hiring and management of physicians, negotiating with physician groups, dealing with problem physicians and problem issues, and interfacing with the outside boards and regulatory groups around problem individuals. While the patient safety officer might be called in to work in tandem with the Chief Medical Officer to think from a systems perspective around an issue, if you derail the patient safety officer's work to deal with the problem personalities, they won't be able to do the systems-based work that's needed.

RW: What's the interface between what you're doing and the information technology department?

AF: Information technology enables clinicians to do their work. The job of the patient safety officer and the IT person is to understand the human factors involved—the relationship between human beings and their environment—to facilitate effective implementation of IT systems. Again, one of the risks is that you can get buried in IT implementation really quickly. There are numerous management and interface issues between clinicians and IT. All of that has to be managed effectively, and the patient safety officer has to play a role. But the unique skill of the patient safety officer lies in the culture, and information technology enables that culture. The interface can be large or small depending upon the implementation you're dealing with. To give you a specific example, the implementation of computerized physician order entry is IT based. But to develop the decision support that underlies the rules that show up as alerts and so forth, the patient safety officer may need to meet with the appropriate clinicians to help develop the rules. The IT person will then take those rules and turn them into the decision support process.
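To make that division of labor concrete, here is a minimal sketch of what one clinician-authored rule might look like once IT encodes it as decision support. The drug, the dose limit, and the function names are hypothetical illustrations for this example, not taken from the interview or from any real CPOE system, and the numeric value is not clinical guidance.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    drug: str
    dose_mg: float

# Hypothetical rule agreed upon by clinicians: flag any order that
# exceeds a maximum single dose for a drug. Illustrative value only.
MAX_SINGLE_DOSE_MG = {"warfarin": 10.0}

def dose_alert(order: Order) -> Optional[str]:
    """Return an alert message if the order breaks a dose rule, else None."""
    limit = MAX_SINGLE_DOSE_MG.get(order.drug)
    if limit is not None and order.dose_mg > limit:
        return (f"ALERT: {order.drug} {order.dose_mg} mg exceeds "
                f"the {limit} mg single-dose maximum.")
    return None

print(dose_alert(Order("warfarin", 15.0)))  # fires the alert
print(dose_alert(Order("warfarin", 5.0)))   # None: within the limit
```

In this sketch, clinicians own the contents of the rule table, and IT owns the machinery that evaluates it at order entry, which maps onto the split Frankel describes.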

RW: Can you give a little philosophy about top down versus bottom up, especially in large organizations where the microclimates are likely to vary tremendously from one floor to another, from the OR to the ER?

AF: It's an interesting challenge. From the standpoint of a patient safety officer, the ideal system to manage would be a top-down organization, I think. I say that because if you want to be able to make system-wide change, it's very nice to have control at the highest level that you can then channel down easily. Obviously, a lot of changes in patient safety around teamwork, for example, have to do with relationships. And those are all local. So a balance between the two is probably what you really need. Very few organizations actually achieve that. In my experience, as I look at organizations that are very top down, the frontline providers tend to feel disenfranchised. If I look at organizations where control has been ceded down to the grassroots by department and so forth, as those groups become more powerful and siloed, they can get to a point where it's impossible to make effective change. The goal is a careful balance that's fluid, and an environment that's transparent enough to allow issues to rise to the surface quickly. Then you can assess whether they can be addressed locally or whether they need to move up to a higher level. If they need to move to a higher level, the authority can move back and forth to appropriately address them. I think that leadership WalkRounds help promote this transparency and fluidity.

RW: Let me ask you a question about transparency. Let's say, hypothetically, that in one of your hospitals, a terrible error leads to a patient's death. How do you think about whether to disseminate information about this broadly, knowing that only one e-mail separates you and the local newspaper?

AF: After supporting the patient's family and the caregivers involved, my interest first and foremost is that the next patient not be harmed. I'm not going to let the media—nor, quite frankly, the lawyers—dictate my actions, because doing so adversely affects the good decisions that need to be made. In the interest of honesty and good care, you have to let the public know about an event and apologize if necessary. Secrecy gets you nowhere. So if I'm sitting in an organization and a terrible event occurs, the answer is that the e-mail goes out to protect the next patient. There's just no question. And then when the media comes knocking on your door, the answer is that we're health care, not another industry, and our job ethically is to transparently promote a reliable and safe environment. To do that, you need transparency. I would be promoting that transparency and acting on it so that others could see. Ultimately, if the public knows you're willing to be transparent, and you always are transparent, they'll realize you're a straight shooter and support you. It's the holding back and the carefully crafted, uninformative answers that anger the media and the public.

RW: How specific should the information be?

AF: You want to state what has happened, but you don't want to conjecture until you gain the information. I think it's reasonable to say what happened, and then to be sure that the people fielding these issues to the public are skilled enough to say, "I can tell you that this is what has happened, and I can assure you that we will inform you further as we gain more information," and be as candid as possible. In the long run, if you take that approach, when it's reviewed, the answer will be that we did what was right. We described what happened. We didn't conjecture, and then as the information came out, we proceeded to give it in a way that would ensure safety and reasonably inform the public.

RW: Working in a place like you do undoubtedly has some unique advantages but probably some parallel challenges. You have a lot of very smart, accomplished people who are used to getting A's on tests. Is it easier or harder to make some of the needed cultural changes at Partners than it might be at another institution?

AF: My experience at IHI, running collaboratives, has led me to think that if I were to pick the ideal size hospital to make changes in, it would be one that has about 200 beds, maybe 250—I think 300 gets a little big. If it's too small, you don't have the resources. There are some interesting sociology theories and research around the ideal number of people to have in a group. One company built buildings for about 150 people; when they got bigger, they built another building. That size makes it possible for everyone to know each other's name. In a hospital that has about 200 beds, the medical staff tends to know one another, and the hospital will probably have enough resources to finance the kinds of innovations and changes that are necessary. In an organization like that, it has less to do with the intelligence of anyone in the organization and much more to do with how effective the leadership is. The resources are less of an issue than the leadership. If you have outstanding leadership, you can make places like that just shine.

Now, if you look at making changes at the Mass General Hospital or the Brigham and Women's Hospital, they are much larger institutions, and it's harder. One must spend time molding new ideas to a well-established culture. But it's not the intellect; that intellect, as you say, is double-edged. On the one hand, people have good ideas, they take up good ideas quickly, and they effectively manage big projects. On the other hand, with that kind of intelligence comes a degree of self-assurance and a perspective that can make it tougher to achieve standardization and conformity—all of which are key ingredients to achieving safety. But more than anything else, it's the size of those organizations that's the challenge. If I go out to community hospitals, they're wonderful to work with simply because they're a bit smaller: very bright people, the same kinds of issues, but it's just easier to make the changes.

RW: Every patient safety officer has done all this incredibly hard work and made these changes, and then a nasty, horrible sentinel event occurs around something that you thought was fixed. How do you keep your own energy and enthusiasm up? And how do you keep the organization's enthusiasm up when there are glaring failures that we all experience despite all the hard work?

AF: The answer there is actually pretty straightforward, and it's almost a purely mathematical model. If you think about how systems function, the human factors perspective tells the story for any process, from a physician writing an order, to the delivery of a medication, to a patient coming in for a procedure. You can break any one of those processes down into a set of steps. So the medication delivery system might be 40 steps. Every one of those steps is done in some measure by a human being, which means that it has an intrinsic error rate associated with it. You can simply multiply out the success rates of every step (one minus each step's intrinsic error rate) to figure out the probability of a perfect end result. It's pure math.

If you then simplify this process and, for example, cut medication delivery from 40 steps down to 20 steps, the total error rate will decrease. The success rate might rise, say, from 96% all the way up to 99%; that is, the error rate drops from 4% down to 1%, a 75% improvement. But in that 20-step process, I'm still at risk for a 1% error rate. Once you understand that, and the math is pretty accurate when you know how human beings err and that they're working in a complex system, the fact that events occur becomes much more tolerable.
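A minimal sketch of this arithmetic, assuming independent steps that all share one illustrative per-step success rate of 99.9% (that per-step figure is an assumption for the example, not a number from the interview):

```python
def success_probability(per_step_success: float, steps: int) -> float:
    """Probability that every step succeeds, assuming independent
    steps with identical reliability: p_total = p_step ** n."""
    return per_step_success ** steps

# Illustrative per-step success rate of 99.9% (assumed for this sketch).
for steps in (40, 20):
    p = success_probability(0.999, steps)
    print(f"{steps} steps: success {p:.1%}, error {1 - p:.1%}")
# 40 steps: success 96.1%, error 3.9%
# 20 steps: success 98.0%, error 2.0%
```

Under this equal-reliability assumption, halving the steps roughly halves the error rate; a larger gain, like the 75% improvement quoted above, would follow if the steps removed were the more error-prone ones. Either way, the residual error rate never reaches zero, which is the irreducible error Frankel describes next.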

In terms of keeping people's morale up, I would obviously describe that differently if I were going to talk to people after an adverse event. The simple description is that if you take human beings with their innate frailties and put them into complex systems, and health care is the most complex system that human beings have devised, you will be guaranteed an irreducible error rate, no matter how you fix the systems and processes.

If you know that there will be irreducible error rates because of human behavior in a complex system, then you're in a position to drive the necessary changes I've talked about. Fair and Just Culture, Teamwork and Communication Training, and leadership engagement all work to make human errors visible because these tools build social relationships. They'll be most effective when the physical layout is optimal for delivering care, and where IT really makes information flow to the right places. Ultimately, if an organization can address these three areas—physical layout, information technology, and social relationships—then the patient safety officer's job becomes much easier.
