
In Conversation With… Sidney Dekker, MA, MSc, PhD

September 1, 2013 

Editor's note: Sidney Dekker is Professor and Director of the Safety Science Innovation Lab at Griffith University in Brisbane, Australia. He has written several bestselling books on system failure and human error, including Just Culture, now in its second edition, and Patient Safety: A Human Factors Approach. His latest book is Second Victim: Error, Guilt, Trauma and Resilience.

Dr. Robert Wachter, Editor, AHRQ WebM&M: What got you interested in patient safety?

Sidney Dekker: The fascinating thing for me has always been the vast difference in safety performance across safety-critical industries. Not just in performance outcomes—the proverbial body count—but rather in how that performance is arrived at: what assumptions industries make about competence and training, where they recruit their practitioners from, and how they keep them competent. When you see those differences play out, health care is almost an outlier. It makes vastly different assumptions about competence, how it is attained, how it is retained, how it is to be checked, and how it is to be trusted. Compared to the other fields in which I have worked, which include aviation, oil and gas, mining, process control, shipping, and rail, health care seems to have fantastic assumptions about actor autonomy, about the right to practice and innovate as you see fit because of the assumed complexity, diversity, and unpredictability of the problems you face. So, discovering and working with that has been really interesting.

RW: In many ways, professions begin with a sense of craftsmanship and autonomy and over time become industrialized. There is always blowback from leaders who don't want their autonomy stripped away. This happened in aviation in the 1970s and 1980s, when pilots resisted the idea of the importance of teamwork and dampening hierarchies. Ultimately they realized it was crucial for the safety of their passengers, but also themselves. Medicine has just been a laggard in that. The argument goes: there's nothing fundamentally different about health care in this regard, we've just been sluggish. Do you buy that, or do you think it's something very different?

SD: In a sense yes, that is the narrative. However, I would not agree with the assertion that there's nothing fundamentally different about it. The problem with making assertions and comparisons like that is that we take health care to be a monolithic thing. It isn't. Yes, there certainly are islands of stability and relative predictability in which it is unethical not to use standardized procedures or checklists. Outside of that though, health care has areas of profound unpredictability, diversity, and real unknowns that are much larger than other worlds. Airline flying, yes there are still some accidents that surprise us. But, essentially, the Newtonian laws of how these things move through the air have been figured out for a while. Aerodynamically there are very few surprises with the current designs that we have. If we start experimenting with completely new designs, blended wing bodies, or new propulsion, we may end up with new surprises. We ended up with interesting surprises about 20 years ago when we introduced new automation systems that started flying the airplane and had different ideas about how to do that than the pilots sometimes had. And that led to fascinating divergences between what the pilots thought they told the airplane to do and what it then ended up doing.

Health care has a legitimate claim to saying that the diversity, the surprises, and the unknowns are larger than in other worlds. But let me say again: that is not an excuse for not looking for islands of stability that we can improve in the same ways we have in other safety-critical worlds.

RW: How do you think about the concept of just culture?

SD: I have always found it very difficult to define a just culture. A just culture is a culture that balances accountability and safety. But what does that mean? It's more a catchy title than something that we can really operationalize. The problem with a just culture is that justice is, of course, one of these essentially contested categories of humanity. That is, what is just to one person is seen as deeply unjust by somebody else. However, in the current debate, particularly in the US, just culture typically gets taken as a mechanism: a mechanism for holding employees accountable for their role in adverse events, but not in such a way as to squelch the flow of safety information.

RW: In some ways there's a tension even in your answers as to whether or not we're talking about an idealized version of justice. When I've talked about and thought about just culture I've never heard it framed that way—that we're actually talking about views of the term justice—versus something that's more empirical and pragmatic, that is, trying to somehow balance a system-oriented approach with the need to draw lines around acceptable and unacceptable behavior. And your thinking about it actually seems to be much more informed by almost Rawlsian principles of what is just and what is not.

SD: That is quite correct. Going back to what you suggested: the current energy or motivation for hospital administrations to get their arms around a just culture for their institution seems to be driven by the need to draw a line between that which we find acceptable and that which we do not, in terms of contributions to adverse events. We need to examine a couple of things when we make those claims. The first is that the fundamental problem and tension of creating a just culture is this: We want to hear everything that people have to say about their contribution to safety, but we can't accept everything that we hear. This creates an interesting tension, because if we want to keep on hearing their contributions, we have to somehow not respond so punitively that we shut them up, along with the people who take an example from that.

Now you also mentioned that this is about perhaps redressing or looking at the balance between individual explanations for safety failures versus system-oriented explanations. I find it fascinating that the enthusiasm for just culture algorithms or mechanisms has grown on the back of the gradual acceptance of systems thinking. We have systems thinking on the one hand: If you want to understand why things go wrong, you've got to look in the organization. A human error is not a personal problem; it's an organizational problem. That has been the orthodoxy in the field for at least the last 20 years. The issue is that this has shifted the liability concerns onto the organization, and that inevitably leads to pushback. We need to find ways in which we can rebalance this attribution of responsibility for why things go wrong. This is where we have to be watchful. It can become easy for just culture algorithms to become an instrument of powerful interests in an institution to bend justice their way. Some data recently came out that suggest as much, which is a bit concerning. Terry von Thaden has done a very important study on the perception of justice in hospitals across the US from different points of view in the medical competence hierarchy. One consistent finding is that those at the top of the hierarchy, typically doctors, usually find the outcomes of just culture mechanisms just. Nurses typically find them unjust. So justice depends on where you sit in the hierarchy.

And this takes us back to the Republic and Plato: If justice is simply an instrument of the strong to get what they want or to keep what they want to hold onto, then it's not justice; it's strong-arming and governing. The problem, of course, is that this tends to shut up sources of safety-critical information that you want to hear from lower in the hierarchy. It also leads to situations in which power gets subverted. In the health care systems of multiple countries, I've seen that nurses are now able to threaten doctors by saying, "I'll report you if you don't do this the way I want you to do it." So a reporting system is then used by lower levels in the hierarchy to strong-arm higher levels into doing things their way. Those are a few indications that the mechanisms we think are innocent and neutral contributors to the administration of justice inside a hospital easily get subverted, either to reaffirm power where it already lies or to become an instrument to grab power where it isn't yet.

RW: I'm struggling with the practical implications of this. It sounds like the argument is, in part, that although an algorithm may seem bland and it's all laid out on paper, it has power relationships and struggles embedded in it; it will feel very different to people at different points in the hierarchy and can be used in ways that are more subversive. Yet organizations need to move forward, coming up with answers to complex questions around culpability and system versus person. So what should they do?

SD: That's a very good summary. This is what they should do: Sit back and think very, very carefully about who is going to be involved in drawing the line. The algorithms that we have at our disposal are in some sense the easy part. It's easy to draw up three sorts of categories and say you have this, this, and this, boom, boom, boom. Now stick behavior into those categories. But somebody has to do that categorizing; somebody has to take evidence from an adverse event and say it is this and not this. That is a human judgment, and that's the hard part. Nothing that we write and give to hospitals makes them ready for making that judgment. Nothing. We can say a category has certain features, but eventually at the end of the road, somebody has to make that judgment.

Now the critical question for justice and just culture is this: Who has the power to make that judgment? And whom do they rely on? I think this is where even the ancient Greeks may have gotten it wrong, because the question is not only who gets the power to make that judgment but also who is represented there. Who else gets to have a say? The only way to assure a form of justice in the hospital is to make the basis for that judgment as deeply and broadly representative as possible. That is, not only have hospital administrators or some type of high-level board make these calls, but actually have input from the people who do the messy work every day, who know its details intimately, its goal conflicts and difficulties, and who are able to comment on whether something was reasonable or not.
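
Editor's note: Dekker's distinction between the algorithm (the easy part) and the categorizing judgment (the hard part) can be sketched in a few lines of code. The sketch below is purely illustrative: it assumes the three-category framing (human error, at-risk behavior, reckless behavior) used by some just culture algorithms, which Dekker does not name here, and every label and function name is hypothetical.

    from enum import Enum

    class Category(Enum):
        # Illustrative labels for a hypothetical three-category framing.
        HUMAN_ERROR = "console the individual, fix the system"
        AT_RISK_BEHAVIOR = "coach the individual, remove the incentive for the shortcut"
        RECKLESS_BEHAVIOR = "consider disciplinary action"

    def respond(category: Category) -> str:
        # The easy part: once a category is assigned, the response follows mechanically.
        return category.value

    def classify(evidence: str, panel_judgment: Category) -> Category:
        # The hard part: no rule turns evidence into a category. The judgment
        # arrives as an input from people, and Dekker argues the panel supplying
        # it should be cross-hierarchy in its makeup.
        return panel_judgment

However exhaustively the categories and their responses are encoded, the classify step still takes a human judgment as its decisive input, which is exactly where the question of representation arises.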

RW: As you were talking I was wondering: is this a jury of your peers? And if so, given the differences you pointed out between how doctors and nurses perceive the fairness of the system, should these be cross-disciplinary panels, or should they be largely representative of the people who are being accused?

SD: I would encourage not necessarily cross-disciplinary but cross-hierarchy panels. That is, if there is a particular decision to be made about a surgeon, I would certainly have an operating theater nurse in there. Having that perspective involved, someone for whom the world looks quite different inside an operating theater, who sees different things, who has perhaps a completely different overview of the social order of what happened in that operating room, is probably invaluable for contextualizing the behavior that is now at issue.

RW: When you now see health care delivery institutions, hospitals and others, grappling with bad mistakes they commit, what problems in the way they handle them are the most striking to you?

SD: They range from too tough to too variable. I'll give you an example. You probably also know the case of Kimberly Hiatt up at Seattle Children's Hospital. She gave a calcium chloride overdose to one of her pediatric patients. The patient dies; in fact, it later becomes very difficult to connect Kimberly's actions to the patient's death. She is escorted off the premises, dismissed, fired, and her license is revoked; charges are prepared against her. She gets her license back with arduous conditions that are extremely humiliating and impracticable; nobody wants to employ her. Seven months later she kills herself. Now that to me is deeply tragic and deeply unnecessary. In fact, I would like to make a claim (and I do in my latest book) that there is a very strong link between organizational resilience and the psychological resilience of the individual involved in an adverse event. The resilience of the individuals involved, at least the ones that I have been involved with, seems to be much more a product of the community around them and the organizational support offered to them than a function of their individual strength of character. It's about community. It's about support and what the organization does. So people are resilient if the organization is ready to admit that we don't deliver perfect products the whole time: people commit errors here, and we need to be able to deal with that. If an organization is ready to say that, it is also ready to commit to learning from these errors. It will likely have mechanisms in place to track where they come from and how they are dealt with operationally. So I think that the harsh responses to individuals involved in adverse events are unfruitful and unfair.

RW: It sounds like you feel like that happens more on the nursing side than on the doctor side. I don't want to put words in your mouth, but is it?

SD: To the extent that is statistically the case, I think it is in part a function of employment relationships: nurses often are employees of a hospital, and doctors are not. The doctors come to visit, or they contract their services out. You can't fire them.

Every nurse I've talked to can point to all the system contributions and say that all these system problems contributed to this mistake. But they still feel guilty; they still feel responsible, independent of their own ability to show the system contributions. People in health care—precisely because they are drawn to the field to help, to heal, to care, to cure—feel personally responsible for not doing it well, and in fact for harming someone, independent of their own understanding of the organizational contribution. Because somebody put their trust in them. Somebody said, "I trust you to do the right thing."

RW: Do you think that patients, and the citizenry more broadly, the political apparatus, are willing to accept that as a message? How big a problem, in trying to calibrate the response, is it that the public often demands a pound of flesh, and that it's difficult to say to them, "this person is punishing him or herself enough, we don't need to do anything more"? Or that this case simply demonstrates that she was human, and the main issue is making the system more robust?

SD: The latter is the desirable one: to say that these are humans, and things happen in systems that are so complex, resource constrained, and goal conflicted that error is the necessary byproduct of trying to create as much success as we can for you. The issue, though, is that this is very hard. You're right in flagging that. Take, for example, some parents who lost their baby in a forceps delivery; the baby's skull was crushed. The deep tragedy for the parents was of course very obvious. But this blew me away: they asked, "So what mechanisms does the hospital have in place to deal with the obstetrician, who must be in pretty bad shape?" Nothing punitive; in fact they were inquiring what the hospital would do for this doctor so that he or she is taken care of in a way that is humane and meaningful and helpful. And, by extension, what the hospital would do to learn from this thing and make structural arrangements so that it may not happen again. To make that story a public winner is very difficult. As long as we are on the sidelines looking at stories in the media, it's easy to have one position on these things. If you're the patient, or your loved one is the patient, it takes on a completely different nature. It may become easier for some people to forgive error, but typically it becomes harder.

However, there is very strong research showing that, in that position, much depends on how the hospital responds. From the acknowledgement of what happened to the apology, all of that is extraordinarily important in getting people to understand that error is the unintentional and undesirable byproduct of the pursuit of success. But at the end of the day, I think putting monetary figures on it may help. Sure, we can treat medical error as something that needs to be punished away, but there are consequences: we lose the flow of safety-related information, and we lose good practitioners. Whereas if we have more restorative justice, where we take the person who was involved and have them meet the patient-victim, and adopt restorative practices in the wake of such failures, this may result in freer flows of information about these safety-critical things. That is what we can save ourselves, in both human lives and money.

RW: I wrote something with Peter Pronovost a few years ago on balancing no blame and accountability and one of the things that prompted me to do that was the question of what do you do with people who don't follow reasonable safety standards? So what do you do with the surgeon who says I don't believe in the checklist, or the nurse or the doctor who "forgets" all the time to clean his or her hands?

SD: I will only commit to the approach that you and Peter took in that New England Journal of Medicine piece once I am completely, exhaustively convinced that all reasonable system explanations have been ruled out; that may have been, in part, the source of my skepticism about what you wrote. From my personal experience, also in aviation, I would certainly be able to say that some people are less fit for particular jobs than others. There are individual differences. That's uncontroversial. But to suggest that we need personal punitive interventions to get people to conform to a particular practice is truly something of a last resort. The reason for keeping it as the absolute last resort is that it becomes really easy, and in some sense an abdication of responsibility by those who run the system (hospital administrators or managers), to say, "Listen, I did everything I could. I've reminded them of the rules. I've put the washbasins where they are; now they just have to comply." Whereas there may still be all kinds of system issues that have not been cleaned up yet, in terms of workload allocation or other ergonomic things such as, for example, the coloring of the soap versus the disinfectant, all kinds of things that could still be looked at. It becomes too easy for a manager to rely on a punitive personal intervention and say, "Well, in that case, I don't have responsibility for making sure the system provides everything it should provide for people to do this." I first want to be persuaded 100% that all system interventions have been exhaustively run through. Only then, if there's still a residue, am I happy to enter into that conversation.

RW: But you don't worry about the other direction—that it's too easy for the worker to say I didn't like the color of the soap? That's a bit hyperbolic, but essentially the idea that it's okay for me to choose not to follow a safety standard because I don't think the system has been made perfect. And in order for the system to be made perfect, it might take an unreasonable expenditure of institutional resources or time or money. It may also play into particularly the physician narrative that I'm an autonomous decision-maker and I can choose to opt out because I don't like all the circumstances.

SD: If we want to meaningfully, as a community, attack a problem like that, then weeding out and punishing individual physicians who display that sort of attitude is one way, and some people will believe it is fruitful. I am too much of a systems thinker to believe that it is the only or most productive way. I would say: let's look at medical education. How do we start talking to these people? Where does this sense of autonomy, arrogance, and entitlement get bred in? "You're entitled to your color of soap; otherwise you don't need to wash your hands." What nonsense, right? But where is our responsibility as trainers at institutions, as educators, as colleagues, for allowing that sort of position on that issue to flourish and become legitimate? Yes, we can swat the individual mosquitoes, but I'd rather turn the attention to the swamp that breeds them.
