In Conversation with... Lucian Leape, MD

August 1, 2006 

Dr. Robert Wachter, Editor, AHRQ WebM&M: What kind of career did you fashion for yourself prior to getting involved in safety and errors?

Dr. Lucian Leape: I was in the early wave of pediatric surgery. When I decided to do that, it was really the beginning of the specialty. It was very exciting—we were pioneering and developing a new specialty. I'm fond of making the observation that, when I was a resident, the mortality in neonatal surgery was about 25%. Most of that was because of our inability to nourish or ventilate infants. Those problems were solved in the next 10 years or so. By the time I quit pediatric surgery 20 years later, neonatal mortality was down to around 2% or 3%. So I pursued my career in academic pediatric surgery—did the usual things, research, wrote papers, and taught, and I was very active in professional societies for about 20 years. I was very involved in the establishment of the Pediatric Surgical Association, which we formed in order to get specialty status and have boards.

RW: Was there a moment that you said, I'm going to go more strongly in this direction of systems change and policy, or was that a gradual evolution?

LL: Well, there was a moment when I decided I was going to leave surgery. I had done the usual academic things, and I was wondering what I was going to do for the next 20 years. I was always interested in how to change things and make the world a better place, but I guess everybody is. It was clear to me that we had major policy problems. And like many people, I really thought the biggest problem was that the economists had taken over, and it was time for the doctors to get back in and straighten things out. Which is what we've done, right? (LL chuckles)

RW: Of course, it's all fixed. Your decision to quit clinical practice cold turkey strikes me as being unusual. Most folks would be more gradualistic and say, I'm going to continue practicing one day a week. Did you try that for a while?

LL: I don't think you can do that in surgery. I'm almost certain you cannot do it in pediatric surgery. You cannot be a two-day-a-week surgeon and operate on newborns. You're not going to be able to take care of your patients. I'm one of those people who thinks that it's immoral to operate on people and not take care of them afterward. So the only way I could have stayed in surgery would have been to become a hernia surgeon or just do trivial stuff. I didn't see any point in that.

RW: When you made this transition, did you have a clear vision of what you wanted to accomplish?

LL: I wanted to work in health policy, but I wasn't quite sure what that would be. I decided to spend a year in a mid-career fellowship at the Rand Corporation. It gave me a basic background in statistics, epidemiology, and various other health policy issues. Then I came back to Boston looking for work. I didn't really know what I was going to do.

RW: Did a particular person help steer you in the right direction?

LL: For me, very interestingly, like Don Berwick, it was Howard Hiatt, and it was almost accidental. When I returned, Dr. Hiatt contacted me because the Medical Practice Study was just beginning, and they needed some more people with a clinical background. He took an interest in the fact that I had some training in health policy work. I initially told him I wasn't interested. I thought it was a study of malpractice, and I didn't leave clinical medicine to study malpractice. But he convinced me that it was going to be a lot more than that, so I signed on. In all honesty, I really didn't have a whole lot else to do. I was writing a review on unnecessary surgery, which I started at Rand, but I wasn't involved in any research projects. So it was an opportunity to participate in what seemed would be a rather seminal study, and it sure was.

RW: As you were in the middle of that study, what was your sense of its potential?

LL: We always were convinced it was an important study, if nothing else, because of its magnitude. Looking at 30,000 patients gives you some clout. None of us had really thought much about the preventability issue, and nobody knew anything about systems, of course. We weren't completely surprised by our results, because earlier work had shown similar findings. But we were, shall we say, dismayed to find that 4% of patients had adverse events. The surprise for me was that two thirds of them were caused by errors. I'll never forget—I went to the library one day and did a literature search on what was known about preventing errors, and I didn't find anything. And I went to the librarian and said, "I'm interested in how you prevent medical errors, and I've found papers about complications, but nothing much about errors." And I asked her to look over my search strategy because I was not finding anything. She looked at it and she said, "Well, your strategy looks all right. Have you looked in the humanities literature?" And I sort of looked at her and said, "The what?" I know what humanities are, mind you. But it really never occurred to me. So she tried the same search strategy in the humanities literature, and boom, out came 200 papers. I started to read them and discovered James Reason and Jens Rasmussen and all those people. A year later, I came up for air and realized that we in health care could use this. If I didn't know how errors happen, most other people wouldn't know it either. So I decided to write a paper.

RW: So, a medical school librarian set off the modern patient safety movement?

LL: Ergo, there we go.

RW: What went through your mind as you first read Rasmussen and Reason?

LL: Well, what struck me was that what they were talking about made so much sense, and seemed so obviously applicable to health care. So I wondered, why aren't we doing these things? That's why I wrote the "Error in Medicine" paper. And then, based on that I thought, we have to see if we can prove this in practice. So I wandered around and discovered a guy by the name of David Bates. I walked into his office and said, I understand you're interested in medication errors. And I said, let me tell you about systems theory, and this looks like a good place to start because everybody really understands the "medication system," and it affects everybody. So if we can show anything here, we'll get people's attention. So we did some pilot studies, and then we set out to do a big study to see if we could find adverse drug events, and then see if we could determine indeed whether there were systems failures... and then of course, to see if we could change the systems to reduce them.

My whole approach to research, frankly, is that I'm only interested if I can see that it will make a difference in practice. I have tremendous respect for, but little personal interest in, basic science research and theory. So from the very beginning our question was, does systems theory work in health care? Can we find systems failures? And then, can we show that we make a difference with some changes? So it was very focused and very targeted and turned out to be very rewarding.

RW: Talk for a moment about the "Error in Medicine" paper. Was there any sort of back story in getting that published?

LL: It was interesting because when I wrote the paper, I showed it to my colleagues in the Harvard Medical Practice Study, and they tried to talk me out of using the word "error." They said, "This is a red-flag word and you'll just turn people off." And I said, "But that's what it's about. You cannot write a paper about error and not talk about it." And sure enough, the New England Journal of Medicine bounced it so fast, I don't even know if they sent it out for reviews. But JAMA took it. George Lundberg [former JAMA Editor] got a lot of negative feedback. Interestingly enough, I got none—no hate mail. It was all directed towards the editor.

RW: When you were writing it, did you have a sense that this was going to catalyze something important?

LL: I did. I thought, you know, this is important stuff. This has the makings of a paradigm shift. Let's just see what happens. I was so impressed when I read [the literature]. It just seemed to me that it had tremendous power, and it did.

RW: The level of resistance that you've seen over the years from all different fronts—from using the word error to talking about systems—what has surprised you the most about it?

LL: Well, actually, the resistance hasn't surprised me, because I know doctors are very conservative. As [former Harvard Medical School Dean for Education] Dan Federman said, "All doctors are in favor of progress; it's change they hate." What I've been surprised at is the acceptance. I mean it's been very heartening to me to see how many really good people, such as yourself and Don Berwick and others, have taken this up and run with it. That has been the most rewarding thing in my professional life—to play some part in getting a tremendous number of very energetic, creative people involved in trying to figure out how we do all this. Of course it's hard. We're talking about a major culture change. It's going to take years. But I think it's really moving.

RW: It seems to me you've shifted focus over the last few years from the role of systems and human factors to the role of the individual doctor and the doctor-patient relationship. You've written and spoken quite a bit recently about accountability for providers, and about the importance of apologizing to patients for errors. Was that a conscious change in focus on your part?

LL: Yes. It seemed to me that I have less to contribute to the ongoing efforts toward system change. We just finished a collaborative to implement safe practices here in Massachusetts: reconciling medications and communicating critical test results. But it's been obvious to me that there are other people who can do that sort of thing better, not the least of the reasons being that I've been out of clinical practice now for some time. Also, it seemed to me there were enough people carrying that ball that I could back away from it. But there were other areas that we were not paying any attention to. And although I've been outspoken about saying "it's not bad people; it's bad systems," the fact is that we do have some people problems. I don't think they're bad people, but certain problems are being totally ignored. It seemed to me that this was another systems challenge—that we don't have good systems for identifying and helping providers who need help. It was another opportunity to start the conversation about something we need to talk about.

RW: I imagine that it's hard for people to hold these thoughts in one brain—that it's mostly about systems, but in fact there are some "bad" people. Do you worry about it, to some extent, undermining the message about systems thinking?

LL: Well, I don't worry about it, maybe I should.

RW: I only ask because I feel this myself when I say that both views are correct. It's mostly about systems that don't work very well, yet we do have to acknowledge that some people are not practicing as well as they should and then deal with that more effectively.

LL: Well, my answer to that—which I have to admit sounds a bit sophistic but I believe it—is that it really is all about systems. The deficient performance issues of individuals are a major systems problem. We do not have good systems for making sure they're well trained and maintain competency. We don't have good systems for assessing their performance. We don't have good systems for helping them. That's exactly what I'm trying to get people to look at. Let's improve systems so that we're sure that the doctor who treats you is competent and mentally balanced. I don't really see that as a dichotomy. I just think it's another set of issues to which we must apply similar thinking. That is, getting beyond blame, punishment, and training and going on to saying, how can we set it up so that we can do all these things much more efficiently and completely?

RW: As I was reading your apology document, I was struck by the clear statement on the first page, that this is not about a business case, it's a moral and an ethical case. How do you see the relative role of these two types of motivations in terms of the overall safety movement?

LL: On the apology front, I'm very concerned that people put too much emphasis on the fact that this may reduce your chance of being sued. I happen to think it will, but I don't think that's the reason to do it. I've been struck by the fact that openness, transparency, and full disclosure and apology when indicated is not only an essential part of treatment for the patient, it's also an essential part of treatment for the doctor or caregiver. And I don't think that case has been made. Because when you're not open and honest with people, you're lying. And lying corrupts your own integrity. So honesty, transparency, and apology are just as important for the doctor as for the patient.

But the primary reason to do it, of course, is our obligation to the patient. We need to think of it as a form of treatment. The patient has a serious medical condition: an incipient loss of trust, and emotional turmoil because of what's happened. We need to do what we traditionally do, which is to think of what's in the best interest of the patient and prescribe and carry out that therapy. My hope is that if we can frame it that way, then we can help physicians recognize the importance of doing it. A side benefit is that they will indeed be much less likely to be sued and, if they are sued, the payments will be much less. But that's not the primary motivation.

RW: What are the most enduring legacies of your work in patient safety?

LL: I think the exciting thing is that people now talk about safety all the time, and they're gradually learning how to do it. It's still going to be another decade before we get where you and I would like to get it. But it's only been a decade. So I'm pretty optimistic about the future of safety.

RW: Where do you think we actually will be 10 years from now?

LL: Well, I'm very optimistic. I think there are several currents that are very encouraging. The first is that people like Peter Pronovost have shown that we're not just talking about what an individual enthusiast can accomplish, but things that are replicable. He now has more than 100 intensive care units in Michigan that have gone for almost a year without any central line infections. That's absolutely astounding. There's no reason why we cannot have that level of perfection in every hospital. Within the last year, we have moved from saying these are safe practices that demonstrably make a difference to saying these are safe practices that everyone can do to make a difference. The agenda there is big. Starting with the great work that you all did a few years ago identifying those practices with good evidence, plus the tremendous amount of evidence that's come out since that time, we have an ever-expanding list of safe practices. And we have more and more motivation to implement them because we have results. Those results are not just saved lives, but a lot of saved dollars, which will help answer the business case. In the area of interpersonal relations—and I'm of the school that believes that safety is all about relationships—there's a lot of enthusiasm and excitement about team training and simulation. I expect that to grow and to help change the culture more than anything.

Finally, I've been very encouraged by the response we've had to this effort to improve honesty and promote disclosure and apology. The response of our hospitals was not at all one of resistance; it was "How do we do it?" As I look at it on various fronts, we've really had incredible changes in the way a lot of people think about these issues. Now having said that, I'm fully aware that the great majority of doctors don't agree with much of what I've said. They are at the flat of that adoption curve.
But the rest of us are clearly on the uphill climb, and I think that the next 10 years will push us over the top.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.