• Perspectives on Safety
  • Published December 2006

In Conversation with...J. Bryan Sexton, PhD, MA

Interview


Editor's Note: J. Bryan Sexton, PhD, MA, is Assistant Professor, Department of Anesthesiology and Critical Care Medicine, at the Johns Hopkins University School of Medicine. Trained as a social psychologist, he has become one of the world's foremost authorities on the role of culture in patient safety. He developed the widely used Safety Attitudes Questionnaire and is one of the lead investigators of the Michigan Keystone ICU project, which aims to change practice and culture in intensive care units (ICUs) throughout the state. His research examines the connections between attitudes, behaviors, and outcomes in high-risk team environments, particularly aviation and medicine. We asked him to speak with us about safety climate surveys and efforts to change safety culture.

Dr. Robert Wachter, Editor, AHRQ WebM&M: Define what you mean by the safety climate or safety culture.

Dr. Bryan Sexton: I like to keep it simple. The lay definition of culture is "the way we do things around here." That is, the norms, the practices, the values, and the beliefs of the people working in a given unit, department, hospital, or hospital system. There are components of that culture that are specific, such as the reporting climate or the teamwork climate. Although there are a lot of correlations between different components of culture within a given patient-care area, you can have a work unit, let's say a pharmacy, where there's great collaboration between the pharmacists and the pharmacy techs, but the working conditions are horrible. Or the emphasis on evidence-based medicine might be very poor. So there are multiple dimensions of culture, and you can be very strong, very weak, or mediocre on any of them.

RW: Part of your early research was in aviation, and many people look to the cultural changes in commercial aviation as models for health care. What are the fundamental differences between the tasks and the people and their jobs in these two fields? Where do you think those analogies really don't hold?

BS: If you look inside a cockpit, you have a first officer and a captain, and, in some older aircraft, also a flight engineer. These people come from a relatively similar background. They're trained to do similar things. First officers are in training to become captains. An operating room has perhaps 15 different types of caregivers: anesthesiologists, OR nurses, surgeons, CRNAs, techs, orderlies, support staff, etc., as well as all the flavors of residents, fellows, and attendings. These people interact with each other to share important information, but they're different genders, from different national cultures, they're different ages, they have wildly different backgrounds and training. That's just not the case in the cockpit. In the cockpit, for the most part, you've got a relatively similar group of people in terms of gender, background, and expertise. So when you compare an operating room to a cockpit, you come to an abrupt halt in the analogy when you start looking at how to tackle the problem. Because you can run a captain and a first officer through the same type of training. Physicians and nurses and anesthesiologists and surgeons need their training tweaked so that it's relevant and interpretable to them. Physicians in general are trained to communicate in bullet points, just the headlines. And nurses are trained to communicate in terms of the story or the narrative of the patients that they're with. Now when you add on top of that these other differences like age, national culture, years of experience, and confidence and personality, all these other variables come into play. So I think that we have to be very careful in using the aviation analogy. It's a great way to pique the interest of someone who hasn't yet heard it. But I think it is threatening to become a tired analogy.

RW: One of the main tenets of the effort to improve safety culture—partly drawn from aviation—has been to "dampen the hierarchies." Yet, in the frenzy of a busy operating room, there undoubtedly needs to be some hierarchy, some leadership. How do you balance that tension between the need to flatten the organizational chart while recognizing that complex situations need leadership, and that people have different degrees of expertise and leadership ability?

BS: I think flat hierarchies are generally a bad idea. Health care is becoming more specialized all the time, so the information that a clinician needs—take a surgeon for example—at any given moment might be in the heads of maybe a dozen different people. We continue to have this old world mentality where we expect one person to have all the answers. What we don't do is provide everyone with the training they need to best utilize information that's available to them. We have to facilitate the efforts of very busy clinicians to better utilize independent and diverse inputs so that they can make good decisions. This is not to say that every time we have to make a decision we have to call a committee meeting and take a vote. Rather, it's how do you build in opportunities in the OR to take 90 to 120 seconds just before skin incision for the surgeon to set the stage by saying, "All right, does everyone know each other? Is everyone familiar with the procedure? Well, here's the operative plan and here's plan B. And if anything looks off please let me know, and I'll do the same for you." By doing that before every procedure, you build an expectation in all the workers' minds that they will have the opportunity to raise any of their concerns [Surgeon Marty Makary, MD, MPH, is leading these efforts at Johns Hopkins].

RW: In the patient satisfaction survey world, the question "Would you recommend this hospital?" is often seen as a good predictor of the overall patient experience. Is there one safety culture, or climate, question that captures all of these individual variables?

BS: If only it were that easy. The Safety Attitudes Questionnaire has six distinct dimensions, but within the safety climate dimension, certain items are consistently better predictors of clinical and operational outcomes. Let me give you an example. We would define a safety climate as the extent to which frontline caregivers perceive a genuine and proactive commitment to patient safety in a given patient care area. So one of the items in the safety climate scale is, "I am encouraged by my colleagues to report any patient safety concerns I may have." We find a strong correlation between scores on this particular item and important outcomes like ventilator-associated pneumonia rates, bloodstream infection rates, and length of stay. We just finished a study in Michigan and showed that if you can get caregivers within an ICU to improve on that one item, it pulls up the rest of the unit. In other words, if you can increase the extent to which individual caregivers feel encouraged to report their patient safety concerns in that unit, that moves everything: trust, buy-in, and engagement. That's very powerful. And with further analysis, you see it's all about the interpersonal dynamics related to patient safety—to the extent that you feel encouraged by your colleagues, you are going to step up to the plate and go the extra mile. On the other hand, if you feel like you're going to be ignored or labeled as a problem or seen as a complainer, you won't see frequent flyers at the "suggestion" box.

RW: Interesting, because it sounds like that item is not necessarily a proxy for simple willingness to report or enthusiasm about reporting. It's capturing something much broader about the general ethos of the unit.

BS: That's right. The unit really is like a network of caregivers. And they all answer the question for themselves: Are you going to feel supported socially and emotionally if you come forward and surface these issues with the expectation that they'll be addressed? If someone surfaces an issue and sees it not addressed, it doesn't take very many experiences like that before you just stop surfacing issues.

RW: Is safety climate an institutional phenomenon or a unit phenomenon?

BS: Well, I would go so far as to say for the most part there's just no such thing as institutional safety climate. Do you see variability between hospitals? Certainly, but the average variability that you see within an institution outpaces the variability that you see between institutions. For example, you could have a great ICU, but walk 20 feet down the hall to another unit and it's a miserable place. It's like politics—all culture is local. A frontline nurse in Unit X doesn't really care what the hospital says, so much as how his or her actions are going to be viewed within the context of that specific unit and that specific situation. There is so much variability within institutions that I look forward to the day when we can say that there really is such a thing as a hospital-level culture that we need to focus on. But right now when individual hospitals ask us to provide a hospital-level score, we give it to them by looking at the percent of their individual patient care areas or nursing units that are meeting a goal, or meeting a cultural benchmark or threshold. So for the hospital with 5000 caregivers, the denominator is not going to be 5000, it's going to be the 120 patient care areas in that hospital. This is why having valid and representative data for each patient care area is so important to cultural assessment, and our survey garnered overall response rates of over 80% for the past 2 calendar years.

RW: Let's say I'm a hospital CEO and I'm trying to improve the safety culture in my institution, and you tell me that the culture is terrific in this particular unit, but 5 feet away it's terrible. I'll want to know the pathophysiology of those differences. And what do they mean for my efforts to try to improve the overall climate within the institution?

BS: Well, people expect that hospitals will look like other organizations, so they try to take traditional organizational research lenses and apply them to hospitals. But hospitals are really like corporations made up of a whole bunch of different organizations, each with different outcomes of interest and styles of practice. So, you might be able to treat unit "5 South" as an organization, but not an entire large hospital. This makes hospitals bizarre places to do climate research. It also means that, despite the persistent search for cookie cutter interventions that you can unfold across a hospital—that sure sounds great, one pill for everybody—I just haven't seen the evidence that this is a legitimate strategy.

Our assessment method is that you measure things hospital wide, but capture information at the patient care area level. This lets you look at the strengths and weaknesses within the institution, but also to identify units that are doing very well, and those that are doing poorly. Then you can focus your efforts, your executives' attention, and your patient safety resources on the units that need the help the most, rather than try to unroll an intervention throughout the hospital.

Let me give you an example. I think executive walkrounds are a fantastic idea. As we've studied them in practice, we found a significant matriculation, if you will, of executives, towards those units that are doing quite well, and away from the units that need the help the most. The reason is often that they don't have a roadmap that tells them where to focus their time. So how do you overlay a widely used intervention like walkrounds on top of this idea of culture? Well, when you have captured culture at the patient care area level, you can first target those who are doing the worst, show them that you've heard their calls for help, and that you're addressing them. The folks who do feel comfortable talking about patient harm, who have physician champions, and who are already filling out incident reports don't need as much structure and coaching.

RW: Talk a little bit about what you've done in Michigan and what you're finding?

BS: As an aside, I would give the example of my wife's yoga instructor, who begins yoga sessions by asking, "Does everyone know where they are, because we can only begin when you know where you are." Such an elegantly simple concept, but how often does a given nursing unit actually pause and focus on their context of care delivery—the good, bad, and ugly? This is the role of safety culture assessment in quality, as it allows you to put your finger on the pulse of your current norms, and to monitor those safety norms over time. After looking into the culture mirror, the success of a project like we did in Michigan hinges on the extent to which you can provide frontline caregivers with better structures and packaged interventions that are easy to roll out, so that they will do it with enthusiasm and with methodological rigor. We began our work with all the ICUs in Michigan with a baseline culture assessment, which might demonstrate that the nurses and respiratory therapists in a given unit really hated each other. Well, that's important to understand before you try to adopt a cookie cutter ventilator bundle and simply assume that it will roll out perfectly well in every unit. By doing our survey and analysis first, we pause, focus, and identify underlying cultural issues that need to be dealt with before implementing innovation. Most innovations fail, not because they're bad innovations but because they're implemented poorly.

In Michigan, we do coaching calls with the participating units to identify barriers to rolling out the intervention and provide them with suggestions. We ask, for example, are you having any problems or barriers at the executive level, or with the physician leadership or the nursing leadership? Through these discussions, the institutions and units help each other just as much as we help them. And we've provided these ICUs with a venue to work through these important interventions but to do it with methodological rigor in terms of measuring processes, outcomes, and culture. So we're not making as many assumptions about what gets done and what doesn't get done. And by doing all this, we have found that we were able to reduce bloodstream infections by 80%, and sustain that improvement for the entire state—it has been over 2 years now!

RW: All right, let's say I've done your survey and it tells me that, in fact, the nurses and respiratory therapists don't get along, and I worry that this may get in the way of implementing improvements in my ICU. You're my consultant. What do you tell me to do in order to fix things?

BS: That's a great question. The honest answer is we're still trying to figure out why culture is resonating so much with frontline caregivers. When you take their results in aggregate and put it into a chart and then point at the depersonalized chart, it's no longer Mary or Joe complaining. It's what the unit said about how we're doing. Here's where we're strong, here's where we're weak. Now given that we're so weak on this one dimension, let's have the conversation and understand, in an interdisciplinary way, how we can take this QI or patient safety intervention and apply it in a way that works.

Let me give you another example along those lines. We were surprised to find out, we weren't looking for this, that faith-based hospitals [such as Catholic hospitals] improved more, and more quickly, than their larger academic teaching center counterparts. Here's what we've learned. When large academic teaching centers adopt an intervention, they monkey with it. They have researchers who put on their "publish or perish" research hats and say, "We're going to add this variable, and oh, we don't need to ask that variable." Whereas you don't see the faith-based sites taking a tool and dissecting it and putting it through eight revisions before they implement it. They take it and they say, is this what the research tells us we're supposed to do? Okay, and then they do it. And with that, we see their safety climate improve rapidly. There are certainly a couple of other advantages that the faith-based sites have. For one, generally speaking, they often don't have the additional burden of a teaching mission. The faith-based sites have long been committed to patient-centered care. In academic teaching centers, the teaching mission creates a lot of turnover in terms of direct providers of care. Also faith-based sites are generally smaller, it's often easier for them to take an intervention, find a champion for the intervention, and roll it out. It's often harder to find such people in large academic centers.

RW: How do you make the argument for the resources necessary to do the hard work of changing culture?

BS: Changing culture in and of itself is not the goal—rather, it is creating the context in which safe and effective care is reliably delivered. Let's take bloodstream infections as an example. If you look at the culture in that statewide sample of ICUs and specifically how comfortable the frontline nurses are in speaking up if they perceive a problem. Nurses who feel comfortable speaking up will say, "Excuse me, doctor, did you wash your hands?" before the doctors insert a catheter or a central line. Now that isn't a pharmacological link or even a treatment-related link between what takes place at the bedside and the clinical outcome. That is an environmental link. And to the extent that we start to acknowledge that there are environmental issues that we can understand and improve, that's a very strong argument for the additional resources you need to get infection rates reined in.

We've essentially eliminated bloodstream infections in Michigan: the median bloodstream infection rate for the entire state is now zero! You have to tackle the clinical issues: give people better clinical ways of doing things, and make it easier to deliver evidence-based medicine. But you also need to create the environment that encourages people to participate more and to feel more a part of the care delivery process. So the link between culture and outcomes is real.

RW: How do you relate this back to a policy level?

BS: Keep it evidence based, period. A couple of months ago there was a story in the Wall Street Journal about SBAR, a standardized way of doing briefings, where you provide the Situation and the Background and your Assessment and then your Recommendation of what to do. I'm a great fan of structured communications, don't get me wrong. But the story of SBAR is a typical example of how we overreact to the potential impact of something, just like we've done with rapid response teams, medication reconciliation, or operating room time outs. We mandate them before we fully understand them, and we don't do a good job of helping folks implement those interventions. The problem is that we're not studying the components of what makes structured communication powerful. And so maybe we're wasting caregiver time. How do we know that if you train a nurse to use SBAR in one situation that it's the appropriate way to structure communication in other situations? The IHI awareness machine is out there doing the hard work of putting this kind of intervention on everyone's radar screens. But we all need to do a better job of treating patient safety as a science, looking at the individual components that have the greatest potential, and demonstrating that they actually do impact the bottom line for safety. Policy changes need to be contingent upon evidence, not just enthusiasm and an awareness of the problem. The more there are these evidentiary links, the better opportunity that this actually will have staying power.
