
In Conversation with...Peter J. Pronovost, MD, PhD

October 1, 2010 

Editor's note: Peter J. Pronovost, MD, PhD, is a Professor of Anesthesia, Critical Care, and Health Policy at Johns Hopkins University and Director of the Johns Hopkins Quality and Safety Research Group. He may be best known for having led the Michigan Keystone project, which used checklists and other interventions to markedly reduce catheter-associated bloodstream infections in ICUs throughout the state. For this work and more, he received a MacArthur Foundation Fellowship, and Time Magazine named him one of the 100 most influential people in the world. We asked him to speak with us about checklists and the science of improving patient safety.

Dr. Robert Wachter, Editor, AHRQ WebM&M: Tell us about your new book.

Dr. Peter Pronovost: Safe Patient, Smart Hospital tells the story of my journey to get Johns Hopkins, a large academic medical center, focused on patient safety. I begin with how Josie King's death impacted my organization and me personally. I recount my father's death from a misdiagnosis and how that motivated me. And then I focus on the problem of catheter-related bloodstream infections and the trials and tribulations of how we were able to reduce those infections at Johns Hopkins, in Michigan, and elsewhere. What I try to make clear in the book is that, though the checklist makes a great, compelling story, it's an incomplete story. What we found is that without measuring infection rates and empowering nurses to speak up, the rates won't go down.

The public doesn't really care whether we use a checklist or how we practice medicine. The end game that the public cares about, and I think that doctors and nurses care about, is making care better and reducing harm. I think it's no different from why wrong-site surgeries haven't gone down with the time out, or why many other sentinel events haven't gone down with simple checklists. To some extent, Donabedian had it right when he was interviewed on his deathbed and asked what the secret of quality is. He said the secret of quality is love. Because if it's not in your heart, if you don't truly believe that this is the right thing to do, and if there isn't a humbleness that I'm human, we're never going to make progress on infections.

In our work, the hardest part was trying to get the nurses comfortable speaking up and get the doctors to realize that it's okay if they make a mistake. They're allowed to be fallible, but they're not allowed to unnecessarily put patients at risk. So if the nurses see them not doing something that they should be doing for patient safety, then they have to speak up. But it's not about challenging them. It's about putting the patient first. I had an interaction with a surgeon about my concern that a patient had preoperative pneumonia. The surgeon started screaming in front of the patient and staff that, "All you anesthesiologists want to do is cancel cases, and you're going to make me late for clinic now getting this chest x-ray." Afterward, I told him I found him offensive. He apologized and said, "Well, I didn't mean to offend you, but in the heat of battle you have to exaggerate." I was struck, and I said, "That explains a lot, because I don't see this as battle. I actually thought we were on the same team, the patient's team." But the interaction was clearly couched in the context of a battle. I think far too often we've done that.

We just published in the BMJ that our Michigan results have now sustained for more than 3 years. It's really breathtaking. They stayed with that median of zero, with about a 70% reduction for 3 years. I can't make a causal inference of why the benefits were sustained, but when I speak to staff and ask clinicians what happened, uniformly they say it was the culture change. It was culture change on a couple of levels. First, they changed their mental model—now they believe that harm is preventable rather than inevitable. Second, they say we really changed from being adversaries to being on the patient's team. So we're okay questioning each other because we know it's just ensuring that we're doing what's right for the patient.

RW: You've spoken a lot about measurement being in some ways our way of making it clear that there are real patients who are being harmed if we don't do the right thing, and about the importance of teamwork and culture change. Where does the checklist fit into all that? You've pushed hard to make it clear that it's not all about the checklist, but what part is about the checklist?

PP: One of the key lessons that I've learned is that there are different types of problems in safety that require different solutions. I think for far too long we've approached it as if we have a hammer. Whatever your hammer is, whether it's PDSA, FMEA, or root cause analysis, everything's a nail. One of those problems is translating evidence or research into practice: problems for which empiric studies show that we should use certain interventions to improve certain outcomes. Checklists are marvelous for that. Because the way we currently summarize evidence is through guidelines, which run anywhere from 100 to 300 pages and might have 80 to 150 conditional probabilities or if/then statements. And nobody can read, let alone do, 80 things. Guidelines don't prioritize, and they don't necessarily incorporate tacit knowledge. And when evidence is incomplete, they're often ambiguous. But as a practicing doctor, I don't have the luxury of being ambiguous. I have to make a decision.

Checklists are really helpful to summarize the evidence from guidelines into a handful of concrete, unambiguous behaviors for clinicians to do at one point in time. We're also learning that checklists can help somebody do a specific task that is linked in time and space. For example, on admission to the ICU do this, this, this, this. Or when inserting a catheter, use these five items on a checklist. We then couple that with really looking for barriers to doctors using those items. I think the other reason for our success is that we approach this work with the mental model that doctors, nurses, and hospital administrators want to help and not harm patients. And if they aren't helping, there's typically some barrier that's prohibiting them from doing so. The supplies might not be available, or they may not agree with the evidence, or they may not know the evidence. For far too long we've bad-mouthed those who weren't complying, as opposed to saying, let me understand why it's hard for you, so we can make it easier. Now, if everybody agrees with the items on the checklist and the nurse speaks up and the doctor wantonly says, "No, I'm not going to comply," I think we need to have stronger accountability. But we've had remarkable success with this approach: using a checklist and taking time to solidify what is to be done, and then making sure we address the systems and culture that make it possible to do it.

RW: How do you find the discipline to take 100 pages of guidelines and 500 references and distill it down into 5 things that you can put on a page?

PP: I wish I could tell you that I was more scholarly in choosing those things. I'm a clinician who works in the ICU, and I have a PhD in clinical research, so I understand interpreting evidence. We took the guidelines and asked: Which of the items have the strongest evidence—for the epidemiologist, the lowest number needed to treat? Which have the lowest cost? And which have the lowest risks or barriers to use?

One of the things we're working on now is being more quantitative in how you reduce guidelines to a checklist. And we're taking a novel view, tapping into communities of practice by using some Web 2.0 technologies to let both patients and clinicians vote on these issues. So we ask them to tell us, "If you had five items to pick to treat diabetes, or to prevent catheter infections, what would you choose?" We also ask them to tell us the strength of their belief that those actions will improve outcomes using something called a prediction market, almost like the stock market. I think we will make dramatic improvement. The stock market is a very efficient knowledge market, in that all of the information about a stock is known when you or I go to purchase it. But health care is our most inefficient knowledge market. That is, the knowledge is out there, but it's not necessarily in the hands of the patient when they seek care or the doctor and nurse when they're treating patients. As we have this explosion of 18,000 clinical trials a year, how do we make sure that when I go to get care or when I provide care that there's an efficient knowledge market? That I know what works best? I don't think that writing guidelines every 3 to 5 years is going to be the answer. It's too static, and knowledge is too dynamic. It fails to incorporate tacit knowledge. I believe that these communities of practice facilitated by Web-based tools will really help.

After our Michigan work got some attention, Congressman Henry Waxman held an oversight hearing on what the government is doing to prevent health care–associated infections, and as part of that he planned to survey all the states and ask them if they're using "The Pronovost Checklist." When I saw the survey, I said, that's okay to ask, but I think a more important question is, are they measuring their infection rates, and if so, what are they? And he agreed to add those questions. So they sent the survey, and not surprisingly all 50 states said, rah, rah, everybody's using the checklist, not a problem. But only 11 measured their infection rates, and none were anywhere near as low as the results we've achieved in Michigan. I think it's a telling story about accountability. There was an awful lot of push to regulate this checklist that we used, and I, perhaps ironically, strongly opposed that. I think regulations are too slow and too blunt to keep up with the science of medicine. And lo and behold, since our study, there's now emerging evidence that the chlorhexidine sponges likely needed to be added, because there's new evidence that we should be using them. But what we should do is require that hospitals monitor their infection rates. And then let hospitals innovate, let clinicians do what they do best and find out how to drive science and get the best results possible.

RW: How do you balance the tension between the role of putting the data out there and trying to motivate change, as opposed to being more prescriptive about how to make change happen?

PP: The big reason why we've seen such paltry impacts of pay-for-performance is that the measures they're incentivizing are often not meaningful to physicians. They're often collected using administrative data. Often, clinicians won't even get feedback on how they're doing; it comes at a hospital or an administrative level. With catheter-associated infections, we now have frontline doctors and nurses getting feedback about their infection rates, and we roll that up to a hospital or a system or a state and now a national level, so that everybody's looking at the same measures. And most importantly, they believe it's an important signal.

RW: At some level, the data of thousands of events across a state with hundreds of hospitals are important, and at some level being able to tell the story about a single person who died, a small child with a mother (like Josie King) is equally important. How do you balance that? Is it with stories, or with statistical data and graphs and curves, or is it some combination?

PP: I think it's some combination. I use stories and estimates of lives saved to engage and get people motivated. When we start these projects we ask all these hospitals to tell their own Josie King story, and we have a little tool called the opportunity estimator, where they put in their infection rate and we spit out how many people died from those infections and how many dollars they cost. Now, those estimates aren't super-precise, but they open people's eyes to say, wow, we had 5 deaths. But I don't think stories or estimated deaths are good for accountability or for evaluating progress. I think patients deserve better than that. For understanding why things work, certainly, stories or qualitative data are informative—especially for things that we're never going to measure as rates. Some of these are rare events, and the best we can do is assess knowledge or compliance with some protocol. But it still has to be data based. The way I separate these is that stories engage people in the work; the data have to be what we hold ourselves accountable for. They serve different purposes. Both are needed.

RW: We just passed the 10-year anniversary of the IOM report and thus the 10-year anniversary of the safety field. What's your assessment of how we've done and what's surprised you the most?

PP: I think that there's now a much broader awareness that safety is a problem. It's talked about a great deal; the press is engaged. Most of our training programs are aware of it. Yet what is so humbling is that the empiric evidence that outcomes are better is virtually nonexistent. We are so quick to say, patients are dying, we must go do something. And we squandered a lot of resources, often with little evaluation and little learning, to say how do we do this? This is really hard work. We have to hunker down. It's going to take a large investment in science—in human factors engineers and economists and psychologists and clinicians and health services researchers. It's not going to happen just by encouraging people to do better. So I think the stage has been set now that there's been awareness. I hope the conclusion isn't that for most measures of safety we don't even have a clue of how well we're doing. To put it into perspective, even this work we're doing on health care–acquired infections, the estimate is that about 100,000 people die from those infections a year. That's about the same number of people that die from colon cancer. It's more than twice the number of people that die from breast cancer. And we don't have the kind of rigorous investment that we need to really make things happen. So, hopefully in the next 10 years, we'll see the role of science in safety really being appreciated, and then improved.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.