Editor's note: Professor Sutcliffe is a Bloomberg Distinguished Professor of Business and Medicine at Johns Hopkins University with appointments in the Carey Business School, the School of Medicine (Anesthesiology and Critical Care Medicine), and the School of Nursing, and Professor Emerita of Management and Organizations at the University of Michigan Ross School of Business. She studies organizational adaptability, reliability, resilience, and safety in health care. We spoke with her about high reliability in health care organizations.
Dr. Robert M. Wachter: Tell us a little bit about your background and how you became interested in high reliability.
Dr. Kathleen M. Sutcliffe: I had a general studies background and a second bachelor's degree in nursing. I practiced in the community, and then got a master's degree in nursing from the University of Washington in health care administration and community health. I was very interested in understanding why systems work or don't work, so I went to get a PhD. Rather than going to a school of public health, I chose to go to a business school, in part because I believed that business seemed to be much farther ahead of public health, health care, and medicine in understanding how organizations can be designed to have better outcomes. When I started graduate school in Texas, Karl Weick was there, and I was very attracted to his ideas.
When I got a job at Michigan, I had known that Karl had done his work on organizational culture as a component of high reliability. We started discussing how this idea of high reliability was probably more generalizable than had been thought. And so David Obstfeld, Karl, and I worked together on our initial paper in 1999 about collective mindfulness and high reliability. Until that time, the literature on high reliability organizations had not widely diffused into the management literature; it often occupied a niche, particularly in the domain of organizational crises. W. Richard Scott, a sociologist at Stanford, had raised the issue: Why is it that the literature on highly reliable organizations hadn't more widely diffused into the management literature on organizational performance or organizational learning? So that 1999 piece gave the high reliability organization domain renewed energy to go in different directions. Around that time, those kinds of ideas started becoming more visible to people in health care.
RW: So at that time, it wasn't so much that there was a deep literature on high reliability in non–health care organizations that you translated. Really it was that the literature even in non–health care was fairly sparse and more around crisis management, Tylenol, etc., rather than as a way of doing work and setting an organizational culture.
KS: I would characterize the literature more around a particular set of organizations, ones that Charles Perrow had talked about as being tightly coupled and interactively complex. The studies of high reliability originated in the 1980s, as a contrast to Perrow's normal accident theory. And when I describe the differences between those two ideas, I think about it as the glass is half full. We have these tightly coupled, interactively complex systems that some researchers at Berkeley were studying, and they realized that these systems were remarkably high performing despite dynamic complex conditions that didn't let up. I wouldn't say the research was about crisis management so much as about particular high hazard organizations: naval aircraft carrier operations, nuclear power plants, submarines, the air traffic control system. The researchers at Berkeley investigated why those organizations were performing remarkably well. They came out with a set of ideas about what contributed to that high performance. When Karl, David, and I got together and started our work, we wanted to understand those organizations in a different way.
RW: I imagine people looked at error-prone health care organizations and had a reasonably naïve, linear understanding that if you could just engineer and standardize these processes and make it a little more like an assembly line and apply algorithms, you could fix them. How would you characterize the more naïve understanding of how complex organizations work and what you came to understand yourself?
KS: Well, you raise a really important issue. One of the things that we do not discuss a lot (because we pretty much take it for granted) is that these high reliability organizations operate under a logic of anticipation and prevention. They do a lot of standardization. They are very concerned with policies and procedures. They are really active at revising those policies and procedures, provided they have actual evidence and learning that the procedures or policies and standards need to be revised. So they do operate on that, but we also found in these organizations that it wasn't just invariance that contributed to their reliability. It was that they were making adjustments in real time to be able to handle emerging kinds of discrepancies, disruptions, and variations. That was something that had not been written about. When people first hear about this idea of reliability, they generally think of the engineering notion, which equates reliability with a lack of variance in performance.
What we found in our 1999 paper was that reliability wasn't the outcome of organizational invariance. Instead it resulted from the management of fluctuations. Because small things are going to come up, so you have to adjust. If you're resilient, you're making adjustments to try to adapt and to try to keep your performance within a particular range. So there are at least two logics operating: anticipation/prevention and resilience/containment.
RW: So there's an obvious tension between top-down and bottom-up and where the authority lies in an organization. I imagine that resilience involves a whole lot of authority that lives on the front line because the top layers of the organization cannot be nimble enough to manage the variations in real time. On the other hand, if there's chaos and everybody's doing their own thing to react effectively, how does the organization know what's going on and modify its policies and procedures to try to decrease the variance?
KS: There are a lot of ideas in what you raised, but you're right. I think it takes a combination of top-down and bottom-up, and that's why we talk a lot about organizing processes. I see the literature moving away from talking about high reliability organizations toward talking about high reliability organizing. The idea is that these high reliability organizations create a set of daily routines, habits, or practices that are aimed at a particular set of principles, and it is through the enactment of these practices daily that you create an alert and aware organization that can handle what's in front of it. Because the world is not going to unfold the same way tomorrow as it did today. Every day is unique. What our error rate was yesterday or our harm rate was yesterday really has no bearing on today. In fact, knowing that we've been on this good run sometimes contributes to less vigilance rather than more.
I have mixed feelings about keeping track of stuff like that on a daily basis, even though I know that, in Lean, keeping track of defects is really important. I gave a talk about my concerns about keeping track, about health care organizations posting "x days without an accident" on their websites, and all that kind of stuff. I said sometimes I worry that people aren't going to report because they don't want to screw that up. In fact, somebody came up to me at the break and said, "I fell over something the other day." The person didn't want to report it. So I leave that there.
RW: Walk us through the differences between individual mindfulness, the idea of taking your own pulse in a crisis, and organizational resilience and how teams work. Sometimes these issues get conflated.
KS: Well, they're connected. Tim Vogus, Erik Dane, and I recently wrote a review paper on mindfulness in organizations. We were looking at individual mindfulness and organizational mindfulness and connecting those literatures. I don't know that we have an answer yet as to whether individuals being more mindful leads to higher performance in an organization. If you have a lot of highly mindful people, is your organization going to be more mindful? It seems logical, but the jury is out. But I will state that the way people go about their work does matter for organizational mindfulness. For example, we talk a lot about people interrelating heedfully or relational coordination—understanding what's happening upstream and downstream from what you're doing, and how important that is.
I did a study once of medical mishaps. One resident said that he had a patient once, a 16-year-old diabetic who had a horrible infection, and she was admitted to the hospital. She went up to the floor quickly from the emergency department. They weren't able to start any antibiotics because it was that fast. He said that when he examined her in the room he noticed that there were some IV antibiotics at the bedside, but he didn't start them. He went out to the nursing station, there were some new lab results in, and he realized he needed to write a different antibiotic order and did. The upshot was that the patient did not get any medication for 16 or 20 hours, because for some reason that medication order didn't get picked up. It was before EMRs actually. He was doing his job, but he wasn't looking out for what was happening upstream; he needed to connect more with the nurses. And I don't know why the nurses didn't connect more with him; but the system was heedless twice over.
RW: I'm sure you get asked this question a lot. I'm running an organization or I'm running a unit and I want to become more highly reliable. I have a pretty good budget. I have a year to do it. Please tell me what I should do.
KS: I would be thinking about the organization at multiple levels. I would want to understand where the pockets of really good performance are in my organization, and where the pockets are that are not performing as well as the others. You want to pay attention to what things look like right now. Because if we know how things are operating right now and where our vulnerabilities are, we can make small adjustments to current operations. Are we taking actions daily to try to enable people's development and increase their response repertoires? Because that's one way to build resilience. And paying attention to where the expertise is in our organization is important as well—so we can draw on that expertise when problems crop up. Creating a climate/context of trust and respect is absolutely critical because that's going to enable shared understanding and shared knowing. It's going to fuel people's voices and it's going to disable silence. Rudeness in medical settings is really disabling. A great study published in Pediatrics showed that rudeness—not just rudeness directed at a specific individual, but even being in the vicinity of somebody being rude to somebody else, as a bystander—affects your cognitive functioning and can affect your performance. So building a climate of trust and respect is really important. The other thing that is critical is this issue of cross-boundary interactions: really understanding what's upstream and downstream from your task, your job, and doing your job taking that into account.
RW: Have you seen an organization in health care that you would characterize as being a high reliability organization?
KS: I think that if any organization would say it's a high reliability organization, it probably is not. That's why I don't like to even use the term high reliability organization, because I think a high reliability organization doesn't even know that it is a high reliability organization. Karlene Roberts, Todd LaPorte, and Gene Rochlin would probably claim that the organizations they studied would never have called themselves high reliability organizations. They as researchers coined that term, not the organizations. Gene Rochlin said these are organizations that seek perfection but know they're not going to achieve it. That's really the big takeaway: you don't get it behind you. Every day is a new day and you have to continue working because it's a long, hard road and it's a lot of work. And it may get easier, but it may not.
RW: Or the minute it seems easy is the day you're dangerous again.
KS: Exactly, right. The day you lose your vigilance.
RW: When I think about measuring high reliability and resilience and recovery, the measure that comes to mind first is failure to rescue. What do you think about that as a measure of some of these constructs?
KS: I think a lot about failure to rescue. I like to think about the processes of rescuing. In fact, I think about high reliability organizing as a means for rescuing. I think rescuing is where the action is going to be for the future of patient safety. I'm not saying there's no room for more technical advancements, interoperability, and all those kinds of things. Those are critical. I'm not sure what dashboards are going to do for us. I'm not saying that they're not important, but the map is not the territory. We've been doing this standardizing thing. There's definitely more room for that. We need to think more about which tasks are repeated a lot and are more routine and really think about standardizing that way. That's kind of a basic 101 management and organization theory idea that somehow health care never really adopted until just recently. But the action is going to be in thinking about organizing practices and ways to enable the alertness, awareness, and capabilities on the front line. Those kinds of things are really going to contribute to rescuing and making health care safer.