Editor's note: Dr. Chassin is president and chief executive officer of The Joint Commission. He is also president of the Joint Commission Center for Transforming Healthcare, a center he founded to promote high reliability and transformative practice. He was a member of the committee that authored the IOM reports To Err Is Human and Crossing the Quality Chasm. We spoke with him about new thinking in high reliability.
Dr. Robert M. Wachter: What got you interested in high reliability as a central theme of your work?
Dr. Mark Chassin: Back when I was on the hospital side struggling with quality and safety problems, it always seemed like we had to almost invent a new solution for every problem, whether it was pressure ulcers, falls, or whatever. That's because the interventions recommended by available practice guidelines, best practices, stories, or articles never really seemed to work as well as the developers claimed or as well as the published papers said they should.
When I got to The Joint Commission, the board was interested in what else The Joint Commission could do in addition to accreditation to move the delivery system further and faster toward consistent excellence. Every developed health care system around the world is struggling with exactly the same quality and safety problems, so it became clear that we had to look outside health care. There's literature and a lot of analysis of high reliability organizations—organizations that deal with serious hazards all the time but have somehow managed to maintain exemplary safety records over long periods of time.
We put a team together to absorb everything we could from academics and from practitioners in high reliability industries—about how they work and their secrets to success. It became very clear that you couldn't take what high reliability organizations do and force that on health care today. We had to marry what we were learning from all of that work with what we know from our very deep understanding (we do 13,000 surveys in the United States every year) of where health care is falling short.
That marriage of The Joint Commission's understanding of health care with high reliability organizational learning led us to propose three major domains of change for how health care could get to high reliability. It starts with leadership commitment to getting to the ultimate goal of zero harm. The second is safety culture. The third is: How do you get from 50% failures down to 1% or 2% or 0% failures? That's done with Robust Process Improvement (RPI), which incorporates Lean Six Sigma and change management. My late colleague, Jerod Loeb, and I wrote a paper 3 years ago laying out this framework in detail, and it's really caught fire.
RW: You said one of the early lessons was that you couldn't easily translate these observations from high reliability industries outside of health care to health care. What are the biggest obstacles to that translation?
MC: For example, one of the most important mechanisms by which high reliability organizations stay safe is that they have safety processes that are nearly flawless. But every process has flaws. So another of their critical mechanisms for staying safe is that every single person who works in the organization, at every level, knows they have a critical role to play in safety. They are constantly scrutinizing every process they're engaged in as part of their daily work, sensitive to small deviations: small signals that the process isn't quite working right, that the safety protocol doesn't quite fit the situation in front of them, that a machine maintenance program wasn't quite designed for the situation they're in right now. They find these unsafe conditions when the problems are very small—way upstream from posing serious risk to the organization.
RW: Are you saying that these non–health care organizations do that and that's why they're better at safety and high reliability, or is their world of defects and potential defects just less broad, the work less complex, than that of health care?
MC: There's no lack of complexity. What they have created, though, is a culture and an operating style that sensitizes every single individual to the need to scrutinize their environment and their processes so they recognize problems when they're very, very small. They don't have serious breakdowns. They don't have adverse events. They have very few close calls, and when they do have a close call, they study it just as if it were a real adverse event in order to understand how this particular error sequence got so far down the road to almost causing harm.
Contrast that set of activities with health care organizations, whether hospitals or ambulatory surgery centers. Today, because our clinical care processes are so fraught with weaknesses and failure points, most of our health care workers, especially the frontline workers, are working around those problems every day and trying to do the best they can to take care of patients. But because we often make it hard for them to do the right thing, they're always breaking protocols. They often don't even recognize that they're engaged in a workaround, which is itself unsafe, because that's their daily work.
Further complicating our situation in health care is that health care workers in virtually every organization are surrounded by behaviors that suppress reporting or punish reporting. Not necessarily in severe ways, but through the kinds of behaviors that frontline caregivers perceive as disrespectful or intimidating. It can be something as simple as an impatient tone in response to a question. "Why are you asking me that? Just do it the way I said." Those kinds of exposures not only make the workplace unpleasant, but they themselves lead to unsafe practices.
The Institute for Safe Medication Practices has a wonderful workplace intimidation survey that they've fielded twice, in 2003 and 2013. They tabulated about 2100 responses in 2003 and almost 5000 in 2013. They assessed the types of behaviors that frontline caregivers perceived as intimidating: impatience with questions, condescending tone. When we first wrote about these behaviors in 2008 in a Sentinel Event Alert, we called them "behaviors that undermine a culture of safety." The media, both trade and otherwise, characterized it as a problem of disruptive physicians. The caricature is the surgeon having a tantrum in the OR and throwing instruments. Well, that certainly is bad behavior, but it's very rare. Much more common are these other behaviors that rise to the top in terms of frequency. So, the caricature is harmful because it draws attention away from the most common intimidating or disrespectful behaviors. It's also harmful because it implies that physicians are the only ones who engage in those behaviors, and that's not true. Both of these key aspects of intimidating behavior are measured in this survey.
The other thing this survey did is ask about unsafe practices that frontline workers were adopting because of their experience with these disrespectful behaviors. One of the questions was: Have you ever assumed a medication order was correct despite a concern you had because you didn't want to talk to a prescriber who yelled at you the last time you had a question? Or have you asked one of your colleagues to talk to that doctor because they yelled at you the last time you had a question? The percentage of people who said yes is astounding: from a third to a half of respondents. When you compare 2003 to 2013, the progress was miniscule. The bottom-line question was, "Agree or disagree: my organization deals effectively with disrespectful behavior." In 2003, 61% disagreed. In 2013, a decade later, 56% still disagreed. High reliability organizations don't have that problem. Their culture is completely different.
In addition to the problem of disrespectful behavior, another characteristic of high reliability culture that health care doesn't demonstrate well is holding everybody accountable for consistent adherence to safe practices. We've often heard the phrase, "What health care needs is a blame-free culture." Well, no high reliability organization on the planet has an entirely blame-free culture. What they have is a very clear, very transparent way of discriminating between blameless acts and blameworthy acts. Blameless acts are very common because we put people in broken processes where it's hard to avoid errors. We need to know about those situations because they point directly to weaknesses in our safety systems. But there are also blameworthy acts, either patterns of errors or patterns of behavior, such as the serial violators of hand hygiene protocols. If you don't have equally transparent and equitable ways of judging and moving those situations into disciplinary procedures calibrated to the recklessness of the act, then you also don't have a fully developed safety culture.
RW: Talk about The Joint Commission in this regard. Your decision to move the organization in this direction was a big departure from its traditions and makes the accreditation work much harder, I would think. It's easier to check whether an organization is implementing a standard process and much harder to assess culture and leadership. How did you sell that, and how has it all worked out?
MC: The Joint Commission Board was convinced that accreditation by itself wasn't going to get us to where we wanted to go in safety and quality improvement. We created the Center for Transforming Healthcare as a separate part of The Joint Commission where we housed the high reliability work and the direct application of Robust Process Improvement, Lean Six Sigma, and change management. The goal was to try to solve some of these intractable quality and safety problems—hand hygiene noncompliance, wrong-site surgery, handoff communication, falls. That gave us an additional platform to appeal to organizations that wanted to do more and go further, faster toward higher quality.
It's also informed our accreditation program. If you go back to the roots of The Joint Commission 100 years ago, the goal has always been to improve quality and safety. But our accreditation program can't do that directly, because we don't take care of patients. We achieve our mission only when we create very credible, highly evidence-based standards of quality, send expert surveyors to ferret out opportunities for improvement, and communicate that information effectively, collaboratively, and educationally with health care organizations so that they take that information in and use it to drive improvements in the way they provide care.
The reorientation started with a reframing of our mission in 2009. Our mission today is to improve quality for the public by evaluating health care organizations and inspiring them to excel in providing the safest, highest quality, best value health care. Together with our RPI program, that reorientation has driven our accreditation processes to be much more efficient and effective; it has really driven growth like we've never seen before and the building of positive relationships with our customers. This whole approach of engaging customers and giving them the tools and information they need to drive improvement has worked in our high reliability programs to identify, foster, and recognize excellence and to improve our accreditation programs.
RW: The majority of the organizations you accredit now are computerized. How has that changed the work and how has that changed the process of assessing the quality and safety of an organization?
MC: Well, information technology has always been around. It's obviously gotten more complicated, and with electronic health records in particular spreading rapidly, we've commonly seen the opportunity to do it badly. We have trained our surveyors in the most common electronic medical record systems. But they're also focused on the human–electronic interfaces, where a lot of the opportunities for error occur. We've found too many instances where technologies (EHRs, IV pumps, and monitoring systems) don't talk to each other, and other instances where broken processes have simply been automated, which creates unsafe situations for patients very quickly. We've written about that in our Sentinel Event Alerts. We also had a contract with ONC to look at the sociotechnical causes of failure that automation of various sorts introduces. We published that report a couple of years ago. And it goes back to the aphorism I learned a long time ago from a very wise CIO, who said, "Computers don't make us less stupid. They make us stupid faster."
RW: When you think about the cultural dimensions, one thing we're increasingly hearing from workers is that they're burned out. Part of that appears to be from being asked to do many new things as we're trying to improve quality and safety. Do you have that concern as well? Are your folks seeing that in the field—that even people who believe in the tenets of high reliability are burning out, and it's causing its own new safety and quality problems?
MC: Absolutely. The way I think about that phenomenon from a quality improvement perspective is that for the last 15 years or so, we have put an enormous amount of effort into improvement. Before that, those of us involved in quality and safety would sit around and moan and groan about all the evidence that there were quality problems but there was no serious effort to solve the problems. Now we don't have that problem. We have made some progress in all of this, but all of that effort hasn't gotten us to zero harm and we still have very highly visible failures. So why, with all this effort, have we only seen modest success?
To your point about burnout, I think one of the reasons for it is that we have tended to see progress only project by project. You have to have a project on central line infections, a project for high-risk medications like anticoagulants, and a project for falls. How many projects can you keep rolling at any one time? This "project fatigue" contributes to burnout. Analogizing to high reliability again, that's not how high reliability organizations stay safe. They stay safe because zero harm is the natural byproduct of the way they do their work every day. That's the transformational part of the high reliability journey. We need to get health care to the point where zero harm is the natural byproduct of the way we take care of patients every day.
That happens only when all three major change domains (leadership commitment to zero harm, high reliability/safety culture, and RPI) are tackled by health care organizations. You empower employees to identify problems that you then focus on and solve, and they see that their observations are a direct link to improvement. You use highly powerful tools, such as Lean, to reduce waste. If you look at what nurses do, for example, when they are running around, going to five different places, collecting everything they need to change a dressing, all of that activity is waste. Multiply that by 10,000 and you get the estimate of how many nurse hours are consumed by wasted effort that could be eliminated by process improvement. That's a part of the burnout. Solving those problems with Lean, improving the outcomes of processes with Six Sigma, and then wrapping technical solutions, sets of tools, and a systematic approach to change management around them, so the organization accepts and implements the changes needed to get the maximum benefit out of the improvement—that's the RPI component. All of that builds into a very positively reinforcing set of cultural norms and traits that over time shows people that you can solve your own problems. You don't need a whole raft of consultants.
In fact, you can even generate a positive return on investment if you apply these tools to the business processes that frustrate you: ED throughput, supply chain management, revenue cycle. Mayo has a great paper that demonstrates that their version of RPI has consistently produced a 5-to-1 return, very conservatively scored by their finance department. We're now teaching hospitals and systems how to use these tools as part of transformation, embed them throughout their organization, and make sure that everybody is trained and engaged in improvement. You get closer to the high reliability culture where everybody is acting on opportunities to improve and bringing resources to bear on the smallest problems before they get out of hand. That changes their culture. It changes the way they approach their work and they see themselves as part of an overarching effort to get to zero harm.
RW: You were one of the first national leaders in quality and safety about 30 years ago. What has surprised you about the trajectory? What's gone faster or slower than you thought?
MC: Nothing has gone faster than I thought, except the enthusiasm. There was, for a while, a pretty rapidly spreading enthusiasm for improvement. The magnitude of improvement has not gone as fast as I would have hoped. That said, we realized from the beginning that we cannot expect health care to follow a trajectory similar to that of automobile safety or consumer electronics. When Sony releases a highly reliable electronic product in the American market, everybody sits up and takes notice, and everybody buys Sony until American manufacturers produce the same quality. The same happens with Toyota and Honda. That can't happen in health care because of how intensely local health care is. It doesn't matter how highly reliable a product UCSF produces; that's not going to affect Mount Sinai's business in New York or Truman's business in Kansas City or anybody else's. The learnings about how to do this really well have to spread by other means.
One of the difficulties we have is a very high level of skepticism on the part of health care executives that anything from industry will help us a lot. However, what we are seeing is that, for well-applied RPI (incorporating Lean Six Sigma and Change Management), overwhelming evidence is beginning to accumulate that these tools work far, far, far better than anything we've tried before. That pace of change has to pick up. It's disappointing that it hasn't spread more rapidly.
RW: Do you think part of the lack of speed is the incentive system not driving this hard and fast enough?
MC: No, I don't think that payment matters a whole lot. Because it's easy to demonstrate that no matter how you're paid, if you understand how you're paid, you can focus these tools in ways that will improve quality and save you money. You can do it on the administrative processes. You can do it on clinical quality where the incentives align, and that might be different for somebody getting risk-based payments as opposed to somebody who's only getting fee-for-service payment.
This is a great example. One hospital that works with us is Wentworth-Douglass in the Seacoast of New Hampshire. It's a medium-sized, 170-bed hospital. They have a great RPI program. Their CEO, Greg Walker, has been a leader in applying it. In 2012, they opened a new wing that had 130,000 square feet of space that needed to be cleaned. He went to their environmental services department and said, "Now you have another 130,000 square feet on top of the 360,000 square feet that you're already doing today. Here's your challenge. I want you to figure out a way to incorporate this new body of work into what you do now, but you cannot hire anybody else. You cannot spend any more money on temporary help or overtime or anything else and you cannot let the quality of the work slip by a millimeter. It has to be exactly as good as you're doing it now, which is great." They used RPI, redesigned their work process, met the challenge, and saved the hospital more than $400,000.
So you can use these tools to effectively address safety and quality problems and at the same time—they are that good—generate a positive return on the investment in learning the tools by using them to solve problems like these. The combination can easily produce a positive return on investment. The business case is compelling. One of the key obstacles is that most hospital executives don't have any experience with these tools. They don't learn them. They don't have cross-fertilization from industry that they trust. Many of the ones who have been around a while saw the previous generation of industrial tools, like TQM [total quality management] and CQI [continuous quality improvement], crash and burn in health care. And it's sort of like, "This too will pass and we'll wait until something that really is good for health care comes along." But we cannot wait anymore. The imperative to improve today is overwhelming.
RW: Anything else you want to talk about?
MC: A small but growing number of hospitals and systems have taken this high reliability challenge on seriously, and they would tell you that everything changed when they made this decision. I love the way Dan Wolterman, who just retired as CEO of Memorial Hermann, phrased it. He said, "Safety is our core value and it's our only core value." When you take on this goal of zero harm, quality becomes your number one strategic priority. And there's only one number one strategic priority. If you pursue it in that fashion, what they've found is that every other aspect of what they want from their work—whether it's financial performance, employee satisfaction, reduction in employee turnover, patient satisfaction, market share—all of that came along when they successfully pursued the quality goal.
Also, zero harm is not just getting rid of complications. That's certainly an important part of zero harm. It's zero harm for patients. It's zero harm for caregivers. It's also zero missed opportunities to provide effective care. And it also means zero instances of providing no-benefit health care services, like antibiotics for a cold or imaging for simple back pain. And it's not zero preventable harm. It's really looking over the horizon and saying ultimately we can get to zero harm. Because what we thought was not preventable 5 years ago, today is preventable. And what we think may be not preventable today, in a year or two will be preventable. That's where high reliability organizations are. They're never satisfied—even if they're best in class—with where they are at any one point in time. If they aren't at zero harm in their most critical processes, they keep going until they get there. Oh, and by the way, there's no letup when you get to zero. It takes almost as much effort to stay there as it did to get there.