• Perspectives on Safety
  • Published April 2019

In Conversation With… Timothy B. McDonald, MD, JD

Interview


Editor's note: Dr. McDonald is President of the Center for Open and Honest Communication at the MedStar Institute for Quality and Safety, and Adjunct Professor of Law at Loyola University-Chicago School of Law and the Beazley Institute for Health Law and Policy. An internationally recognized patient safety expert, he served as a lead architect for the Communication and Optimal Resolution (CANDOR) toolkit, supported by AHRQ. We spoke with him about lessons learned over the years regarding event reporting and his insights regarding building and disseminating communication-and-resolution programs.

Dr. Robert M. Wachter: When you went to law school, what was your vision of the balance of your life being a doctor versus a lawyer versus something in-between?

Dr. Timothy McDonald: I really went to augment the work that I did in the hospital. Getting the law degree and learning a lot of those methodologies helped me. I never really intended to litigate. I published in some areas, particularly in child advocacy. But the real goal was to add to my skillset. That all changed when the liability atmosphere changed.

RW: In some ways, the IOM report was an effort to disentangle the legal morass from the world of patient safety. There was a tension between the way the health care system had been approaching patient safety, which was almost entirely through the lens of medical malpractice, and an approach designed to be more engaging of clinicians and less focused on the legal world. Coming at it with both a medical and a legal background, how did you see it differently than the more traditional clinicians who were beginning to jump into the patient safety environment?

TM: The primary thing that I saw was how harmful their traditional "delay, deny, and defend" strategy was. I saw the collateral damage from that not only to patients and families, but also clinicians. We had a case where a very famous woman, a COO [chief operating officer], one of the top 100 people in health care, died after we missed this critically low white count. We reached out to our organization and they would not let us reach out to her fiancé or her family. So it was that classic delay, deny, and defend. Forty-three depositions were taken. The doctors, nurses, students, and residents went through hell, and we spent hundreds of thousands of dollars defending the indefensible. As a lawyer I saw that and thought it was crazy. The other part is we didn't learn anything. How ironic is it that we know our care was bad, but the message from our leadership, including legal, was we're still going to defend this. That sends such a hypocritical message to the medical staff when you're trying to focus on what we learned in the IOM report. I viewed that as my charge as a physician to approach this in a completely different way.

RW: People have obviously thought about the dysfunction in practice for many decades before the IOM report. What changed with the report?

TM: There are so many great things in the IOM report. One is the need for us to focus on human factors. The other is to focus on systems issues. The other piece is the notion of just culture. So when we look at harm events, it's this need to move away from "let's shame and blame the one individual and write a legal complaint that focuses on the one or the several individuals" and think of it more in terms of systems doing this. That is a huge paradigm shift.

RW: Event reporting has often felt like something that was drawn from the aviation analogy, yet many systems simply got overwhelmed by the number of reports and didn't have a good way of managing them effectively. What are the lessons learned over the years regarding event reporting?

TM: You can be overwhelmed depending on how you design your reporting system. A lot of noise comes in when people report a lot of stuff. I do think we need to address that problem. I'm encouraged by the work that David Classen and his group have done around automated trigger tools, which are not so reliant on the individual logging in, reporting, and feeling like they're tattling on somebody else. I'm trying to follow that data as closely as possible to see how well the automated reporting work seems to be going.

RW: One of the changes I've noticed in the field over the last decade is a movement away from the individual error as a story with a narrative reported by someone—and moving more toward measurement of harms, a more epidemiologic approach to measurement, and maybe even a beginning to blend the fields of patient safety and quality. Is that a positive change or does it have unanticipated consequences?

TM: I think it's a positive change, but we cannot forget that it doesn't capture a lot of the culture and professionalism issues. Particularly as issues with burnout go up, frustrations go up, it's very difficult for that automated system to pick that piece up. We still need some methodology to capture the narrative on some of the professionalism issues. When I visit different hospitals, one way we know the culture is struggling is I'll ask "Do you use your reporting system as a verb?" At one of the health systems, they call their system iVOS. In many of the hospitals, the people would say right away, absolutely we use it as a verb. Often you'll hear people say, "I'm going to iVOS you before you iVOS me." That to me is one of the biggest issues. If you have a system that allows anonymous reporting, it can snowball into some pretty significant toxic issues. In other words, the more people who use iVOS as a verb, the more their safety culture surveys show very negative results in the punitive response to error.

RW: What do you think the solutions to this might be?

TM: What I haven't seen a lot of organizations do yet is, when there are concerns about professionalism, there should be much more of a push to have a face-to-face conversation versus automatically relying on the reporting. I also believe that, in general, reporting should not be anonymous. If you have the appropriate culture, people should feel comfortable putting things in and maybe that would help drive a little more dialogue, because that is what's needed to deal with some of the professionalism issues.

RW: One issue that comes up around the reporting system as a verb is the long tradition of nurses tending to use reporting systems much more than doctors do. It feels slightly asymmetric.

TM: I would agree with that. When I left the University of Illinois, we had way more reporting from physicians. One way we did it was to give an alternative to the physicians, so that they didn't have to log into some pretty cumbersome systems. They were allowed to call a hotline number and report issues that had happened. We had a safety risk manager on call who would get those calls. In other words they would leave a message, the pager would go off, and the risk manager would listen to it and could call the physician back. But the beauty of that was the doctors knew that they would get immediate response but also immediate support. When it came to communicating to patients and families, the people who carried that pager had been identified as super communicators and also understood and knew how to coach the physicians in some of these hard conversations. They also were great at launching our immediate peer support or care for the caregiver program. The physicians learned this is a great number to call because you're going to get all that support. That is how physicians became engaged in notifying the organization about things that had happened.

RW: Let's talk about care for the caregiver and the linkage between clinician burnout and the consequences of errors on clinicians, as well as programs to help deal with that. Many of us were naïve in the beginning and didn't think hard enough about what all of this would do to clinicians—not only the impact of errors on clinicians, but also the challenges of being a clinician in the modern era.

TM: The biggest evolution of our response to harm in the last 4 years is around that issue. Initially, even when we were doing the CANDOR toolkit, the focus was on patient harm and the second victim. It has moved to include supporting physicians when mistakes happen, but it goes way beyond that, to the much greater psychological burden many of them carry in this modern era of practicing medicine, where they've lost control and they've lost their autonomy. From a quality and safety side, things need to be much more systematized. Then you throw the electronic health record (EHR) work on top of that; you probably saw that recent paper about how much keyboarding is done by physicians in their day-to-day work. Then you throw a mistake on top of that, and it feels like the straw breaking the camel's back.

The University of California, San Diego has done some amazing stuff around physician suicide. They have built in and harmonized the work around burnout, wellness, and peer support—when bad stuff happens—to a much more holistic approach. I'm seeing an enormous amount of interest in that. The other part that I've seen is the maturation of peer support programs to much more holistic ones, looking at everybody who may be involved in a case. The solutions could go anywhere from the individualized peer support, like the three-tier system that Susan Scott has used and made famous, to the recognition of how troubled some of the doctors may be and finding ways to get them even deeper help.

RW: How hard a sell has it been making the business case to an organization to invest in peer support?

TM: Some anecdotal evidence is out there. I saw an article out of Hopkins about a general overview of what good peer support could do for both nurses and physicians. They were estimating a substantial return on investment (ROI) in the millions of dollars a year. It's enough to support the program. Since MedStar's move in this direction, staff engagement scores have shown a statistically significant improvement.

RW: Let's turn to apology and disclosure. Did you think that would turn out to be a useful way of approaching the problem when you went to law school, or was it after you finished and you came back into your medical world that you saw it through a new lens?

TM: I would say that I saw inklings of it in law school only because of experiences I was having in the hospital. It started making sense to me after we'd lost that COO. I began looking at all these cases and wondering why we continue to defend these. This is where the legal piece helped me a lot. We'd defend these cases for 4 or 5 years, and we would hear these defense attorneys say these are defensible; depositions are going well. I was on our medical malpractice review committee almost right after law school. And always these cases we thought were horrible, it would be 3–6 months before the trial date, and defense attorneys would come back and say the case doesn't look quite so good. We ought to position this for settlement. And people would just look around the room and roll their eyes and think we've been hoodwinked again by defense counsel. We should have been having this conversation years and hundreds of thousands of dollars ago. Then when we started hearing out of Michigan what Richard Boothman was doing there, he was a great resource for us when we got the go-ahead to hardwire ours. Because we would go to Michigan, he would then come to University of Illinois, and we would share different things that we were doing. Then it became clearer and clearer to me that this whole notion of delay, deny, and defend is not a good idea. Then Steve Kraman's article came out from the VA, and again it started to make a lot more sense from not only the patient safety side but also liability.

RW: Tell us some of the insights that you've learned in building and disseminating these programs over the years and what some of the surprising lessons have been.

TM: The biggest lesson in building these programs is this notion that progress moves at the speed of trust. I was a little naïve early on thinking that physicians would understand the numbers, and they would understand the logic. I underestimated how long it takes to build trust between the system that responds to harm and potentially resolves things in a physician group. It is way more complicated when you have harm events that may involve three or four different liability carriers. The other part is the negative power of punitive state licensing boards and how that weighs on the minds of clinicians who would just love it if the hospital would assume all of the financial liability and there would be no payment above $30,000 in their name, which triggers all sorts of things. That has been probably one of the biggest issues to deal with.

RW: What typically happens if you made a general decision to do communication–resolution, but you cannot then come to an internal agreement about how to apportion the liability? How does that play out in the real world?

TM: In the real world, what I'm seeing is that a system decides to settle a case. They will then engage the independent physician and their insurer to come along. Depending on what the dollar amount is, sometimes the system will eat it to get it settled. The other way is the organization going in, communicating honestly, and trying to resolve its own part, which in a way forces the physicians to come to the table because the hospital is unwilling to pay the entire amount that people have agreed is reasonable to settle the case. Some other groups are doing novel collaborative work, like the BETA Healthcare Group in California, which holds every-other-month meetings to get all the insurers together at the table to come up with a process that people can trust. Their work around responding to harm, called BETA HEART [Healing, Empathy, Accountability, Resolution, and Trust], is really novel. From the collaborative perspective, they'll bring NorCal, the Doctors Company, MIEC, ProAssurance, Adventist Health, Dignity Health, the University of California, and others to Oakland to talk about a better way to respond to harm, especially when there are multiple insurers involved. The data seem pretty clear that a joint defense approach is way more efficient: you get money to the family much quicker and with a lot less angst and finger-pointing.

RW: What have we learned about teaching health care professionals how to have these conversations and the best ways of doing it?

TM: In the CANDOR toolkit, the communication skills assessment has allowed us to measure the communication skills of about 1400 doctors, resident physicians, nurses, and other people. They have to do four tasks. One is to create a message for parents of a 39-year-old patient who has had a full cardiac arrest during a GI procedure. They're supposed to write down exactly what they would say in those conversations. What I find fascinating particularly among the physicians is, with no prompting and no training, only about 16% of them will say anything empathic, anything along the lines of "I'm so sorry this has happened to your daughter," or, "This must be very scary for you. You don't know the facts yet, but you do know something terrible has happened to her." When we then do the communication training, we notice that when we use actors and the doctors or residents go up and start having these conversations, there just seems to be this natural reticence to being empathic. The default position is going into the Joe Friday "just the facts" mode. But after the training and after feedback from patient advocates and from the actors, we have seen a huge skills transfer among the people who go through the training about the empowering importance of empathic communication. That is a big lesson we've learned.

Another big lesson is, when we look at the skills assessment and measure it, there is a huge bell-shaped curve where some doctors and residents are really good at communicating or being trained to communicate and some physicians really struggle with communication. They will generate a lot of patient complaints and lawsuits. We've learned that, when you have a bad event, you want to pick a very skilled communicator who has had some training in being empathic, and that makes all the difference in the world. Those are some of the ah-ha moments. Thomas Gallagher is a big fan of training everybody in disclosure. I think he has come to see that not every doctor will be really good at every situation, particularly the really difficult ones. From an education theory standpoint, they often come in at that level of unconscious incompetence, and we get them to conscious incompetence. A few of them we get to conscious competence. The goal, particularly for the residents, is to demonstrate how hard it is, so when they have these situations, they call for help. When they call for help, they do get help and they're able to make it through some of these hard conversations.

RW: Would you argue then that the discloser should not necessarily be the attending physician if that physician turned out to be that one person at the bad end of your bell-shaped curve in terms of abilities to communicate effectively?

TM: I would suggest that they not be in there alone. They need to be there because they probably do have a trusting relationship with the patient or family, but pairing them with another person who "gets it" is valuable for everybody, including that physician who often knows that they struggle in those kinds of conversations. I also think having two people there is wise. Some physicians are nervous about that, but having that second person there to reality check and to debrief with the people involved in it is valuable.


