
In Conversation with…Peter J. Pronovost, MD, PhD

June 1, 2005 

Editor's Note: Peter J. Pronovost, MD, PhD, is Medical Director of the Johns Hopkins Center for Innovation in Quality Patient Care. A practicing anesthesiologist and critical care physician, he has appointments in both The Johns Hopkins University School of Medicine and its Bloomberg School of Public Health. Dr. Pronovost's research, which has focused on how to improve patient safety and quality in the ICU setting, has been characterized by a blend of methodologic sophistication and practical attention to the details of making change happen and making it stick. His many contributions include studies of the value of intensivists, of the use of daily goal cards on safety and communication, of an executive adopt-a-unit strategy, and of a comprehensive unit-based safety program. For this work, much of which has been supported by AHRQ, he was awarded the John M. Eisenberg Award in Research Achievement in 2004.

Dr. Robert Wachter, Editor, AHRQ WebM&M: What inspired you to do this kind of work?

Dr. Pronovost: When I was a fourth-year medical student at Johns Hopkins, my dad died from a medical mistake—his cancer was misdiagnosed—and then his pain was terribly undertreated. Despite my objections, his doctors told us that this was the best they could do. We all knew that it wasn't. I think patients deserve more.

I was trained in anesthesiology and critical care and did a PhD in clinical research. I quickly realized that there was really a shortage of physician leaders who understood quality and systems, and that—if we wanted to have an impact on patients—we didn't necessarily need to discover a new gene; we could also make sure that we were delivering, efficiently and effectively, the therapies that we know work.

RW: What differences do you see with how physicians look at quality and safety vs. how quality and management experts do?

PP: My goal [in simultaneously studying clinical research, health policy, and management] was to combine those two cultures—quality improvement and clinical research. At the time, there was no program for someone to learn how to lead quality and safety efforts, because these involve not only epidemiology, biostatistics, and evidence-based medicine, but also change and leadership—all that "softer" stuff that is typically not in a traditional clinical research program. (In fact, Hopkins has now created a part-time doctoral degree program in public health explicitly designed to teach someone how to measure and improve organizational performance.)

There is a dichotomy—some quality experts are far removed from the bedside. Their theories, which are often very elegant, are just not always applicable to bedside reality. One real benefit that physicians working in this area bring is that we still practice at the bedside, so the interventions are very well grounded in reality. Indeed, I test interventions first in my own ICU and my clinical practice. If they don't pass the "sniff test," I would never recommend them for broader use.

RW: Some of the changes you have been able to implement at Johns Hopkins are extremely impressive. Hopkins is often ranked as the nation's top hospital in US News and World Report's yearly rankings. What about Hopkins made this kind of work easy, and what made it particularly hard?

PP: When we started this work, the safety and delivery of care were often not viewed as science. Science seemed to include understanding disease biology and identifying effective interventions but not ensuring patients received those interventions; this work was viewed as the art of medicine. What I tried to do was to highlight to our hospital and medical school leadership that there is science in the delivery and that we often do this science poorly. As a result, patients suffer harm. By exposing that, we revealed the dissonance between our pride and our belief that we are a great institution and the reality that some people were being hurt by adverse events, or were not having the best outcomes or receiving evidence-based therapies. Through what admittedly was a bit of a risky strategy—discussing sentinel events with our CEO, department chairs, and board—and making the dissonance real to them—the institution was galvanized into realizing the need to apply science to the delivery of care, just as we apply it to everything else. The delivery of care is really a learning lab for safety and quality. We continually try to evaluate, in a rigorous way, how we are doing things and how we can do them better.

RW: What factors do you think helped your institution go in the right direction? Are there lessons there for other institutions?

PP: There were a number of important factors. The first was that our institution had several adverse events—one involving a healthy research subject, another involving a little girl—that were very public, in the press, describing our shortcomings. The institution was both forthcoming and shaken by those events, and that created a readiness to change. To the institution's credit, there was a humility that followed these tragedies. Rather than resting on our laurels and saying well, we're great, those were just rare events, people really took them to heart and said that these kinds of tragedies are not acceptable. What are we going to do differently? That humility led the institution to be receptive to ideas about some ways we could try to fix this problem.

I also think that we had great senior leadership support for this. Very early on, both our university president and our hospital CEO saw that we ought to do better and that, in keeping with our mission, we have to do better. Their support really gave the protection that was needed to speak openly about this. The focus was laser-sharp on improving the quality of care and protecting patients from harm, not on cost-cutting or political agendas. Because of that, we could unite around a shared value—that we in health care are here to learn and serve.

RW: You have worked with different kinds of institutions, big and small, academic and not. What lessons from your experience with Hopkins are generalizable? How would a place that lacks the same kinds of infrastructure make such changes?

PP: Organizations need to move beyond just doing projects to having a strategic plan of how to cross this "quality chasm" and fundamentally transform. That plan needs to be structured enough to provide a roadmap for the institution, but loose enough to defer to the ways of local workers. It is vital that we get enough local input so that we can tap into their wisdom and experience.

To transform organizations, leaders need to target three groups of people: senior leaders, project team leaders, and front-line staff. Each of them has to go through various phases. The first phase is engagement and a genuine search for the answer to the question: How is this making the world a better place? The next phase is execution, which begins by being very clear about the plan. The senior leader at that level has to make sure that resources are available to do the work. And the project leader needs to make sure that the staff are aware of the evidence for change and are implementing the plan. Finally, the leader and team need to evaluate what's done, to answer the question, "How do I know I actually made a difference?" Here, the leader has to make sure that there is a measurement system, with executable plans to collect the data.

RW: What led you to realize how scientific measurement might fuel change?

PP: It has been an interesting journey. I began with strong roots and training in methodology and clinical research, then moved into quality and safety. Much, perhaps most, of the efforts in that area were weaker on research methods in terms of rigorously applying the standard principles of measurements and data collection that I had learned. As T.S. Eliot said, "And the end of all our exploring will be to arrive where we started and know the place for the first time." For us, this has meant that we've come around to rediscover the need for rigorous data collection—that this quality and safety work is indeed research. It's a different kind of research, studying the delivery of care, but it ought to be just as rigorous as methods used in classic clinical trials.

Even in community settings, where you might not think the data are as important as in an academic medical center, we have learned that doctors and nurses want to be presented with valid scientific measures. This is hard to do, and it often is not realistic for a single academic medical center to do it, let alone a community hospital, but through funded projects or collaboratives you can pull together the resources to do these studies and then hopefully share them broadly. It takes significant resources to develop tools to engage caregivers, create measures and data collection tools, develop interventions, and package these into a scalable product.

RW: How do you deal with the disconnect so often seen between the programs being developed by administrators and quality leaders, and the front-line workers? I've certainly seen institutions where the patient safety officer described wonderful programs, and none of the doctors or nurses knew they existed.

PP: One of the lessons in overseeing quality and safety at Hopkins is that our center could never be large enough, the budget never big enough, to transform safety and quality. As a matter of fact, the goal is to put my team out of business—to shift the work to the caregivers at the front line. Then, perhaps the Center's job and my job will migrate simply to creating the data collection systems to support their work, but the work itself has to live in the hearts and minds of our doctors and nurses who are treating patients. In our efforts in Michigan and in other states, we have found that when you tap into their wisdom and help them understand why this is important, by providing the evidence and the tools to improve along with a valid measurement system, they are completely engaged.

RW: In terms of big buckets—informatics, culture change, hiring more nurses, standard procedures—how should institutions prioritize their safety efforts?

PP: Unequivocally, they should start with culture change. We have found that without culture change, you cannot reorganize work or implement safety practices, because people are not playing in the sandbox together. We have this tool called CUSP, our Comprehensive Unit-based Safety Program, with enough structure to disseminate throughout an institution, but flexible enough to defer to local wisdom. It simply says, measure your culture (we use J. Bryan Sexton's tool), educate on the science of safety, and then ask your front-line workers how they think they are going to harm the next patient and how that harm could be prevented. It is a very powerful exercise. Assign a senior leader to "adopt" each unit. He or she needs to review the staff's concerns about how they are going to harm patients, and then commit to learn from one defect a month and implement one teamwork tool every couple of months. We have learned that there are many ways of identifying defects: M&M conferences, incident-reporting systems, liability claims, and what staff members tell you. Yet we also learned that despite all this, we often fail to learn from these defects; mistakes recur. The point is to start closing the loop on these defects. We think the CUSP tool really helps—once people get started, they really run with it.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.