
In Conversation with Timothy Vogus about High Reliability Organization (HRO) Principles and Patient Safety

Timothy Vogus, PhD; Merton Lee, PharmD, PhD; Sarah E. Mossburg, RN, PhD | February 26, 2025 

Vogus T, Lee M, Mossburg SE. In Conversation with Timothy Vogus about High Reliability Organization (HRO) Principles and Patient Safety. PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2025.


Editor’s note: Timothy Vogus is the Brownlee O. Currey, Jr., Professor of Management at Vanderbilt University’s Owen Graduate School of Management. He is also a founding and continuing member of the Blue Ribbon Panel that developed Leapfrog Group's Hospital Safety Score.

Sarah Mossburg: Welcome, Dr. Vogus. Tell us a little bit about yourself and your current role.

Tim Vogus: Thanks for having me. I received my PhD in 2004 from the University of Michigan, where Kathleen Sutcliffe was one of my professors and Karl Weick was a member of my dissertation committee. They are foundational figures in the world of high reliability organizations (HRO), and I have been doing research in this field ever since. I also research some of the pressing problems around patient experience, patient-centered care, burnout, and other healthcare workforce issues. I am currently the Brownlee O. Currey, Jr., Professor of Management at Vanderbilt University’s Owen Graduate School of Management. I have been at Vanderbilt for the entirety of my academic career. I am also the Deputy Director of the Frist Center for Autism and Innovation. I am very interested in issues of neurodiversity in the workplace and making workplaces more inclusive: How do we get the broadest range of voices included in thinking about issues related to patient safety?

Sarah Mossburg: AHRQ has published a primer on high reliability, which is on the Patient Safety Net website. It gives an overview of high reliability in health care. For the purposes of this discussion, please describe how high reliability organization principles have contributed to patient safety in your organization or the organizations you have worked with.

Tim Vogus: AHRQ’s primer is a very good, compact orientation to the field; it does a nice job of summarizing the key principles. In terms of organizations I work with, one of the most important things that a reliability-centered orientation does is liberate people from thinking about safety. That might seem like a strange thing to want to do, but the word “reliability” carries less baggage than “safety” in terms of blame and making mistakes. When I talk to people about reliability, it gets them to lean in more because reliability is something that is out there; it is not a mistake I personally am making, it is the system that is more or less reliable. So, it gets me thinking about the things we are doing together. It also gets me thinking about the things that might be nuisances in my everyday work. What are the things that are unreliable? What are the things that I must work around? Which things get in the way and are frustrating?

One of the most important ways in which a high reliability orientation can help is by helping people be systematic and intentional about things that have gone wrong and that we’ve become used to. By asking, “How can we improve the reliability of the system,” we invite ourselves to think more intentionally and anticipate things that might go wrong. And then, when things do go wrong, how do we put a process in place to adapt and contain them?

In terms of organizations I work with, I’ll provide a specific example from a cardiac catheterization lab that was unreliable on an everyday issue, getting patient histories and physicals before catheterization. Even if they miss only 10% of the time, that is a lot of patients for whom they do not have all the details. So, what did they do to improve? They applied some high reliability principles. They asked, “What do we need to look out for? What do we need to keep in mind? Why is this happening?” They asked those questions systematically. And then they said “When we do miss, let us make sure we huddle right away. Let us talk about what went wrong. Who did not get listened to in our team?” Because often, people were actually saying, “we need to get the patient’s history first.” But they just were not being listened to. So there was a lack of deference to expertise like we talk about in high reliability. After this intentional approach to using high reliability principles and a set of related questions and being systematic about attention to detail, internal quality improvement data showed improvements in obtaining histories and physicals. 

This example is instructive about high reliability, tailored to solve a specific problem. Doing this one thing, optimizing this one outcome. That might be a little bit frustrating because it is so particular. The whole system was not transformed in one swoop. But that is the promise and the frustration of high reliability. It is the hard work that gets done every day, and we need to think intentionally about the everyday processes, the mundane stuff, and being really disciplined about ordinary work. It is not waving a wand for a magic cure-all. But I would say overall there is consistent evidence that these high reliability approaches are associated with things like fewer medication errors, fewer patient falls, and increasingly associated with other kinds of outcomes like patient satisfaction and experience and fewer complaints. 

Sarah Mossburg: High reliability organizing is often described as a mindset, with five characteristics: preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, deference to expertise and commitment to resilience. Some of your work suggests that those principles are sometimes implemented unevenly. Are there patient safety contexts in which one principle or another is more critical?

Tim Vogus: It is important to think about high reliability as a mindset. But sometimes when I describe it that way, I realize that people are not necessarily thinking about “mindset” in the same way I am. Mindset is a cognitive orientation constituted through how we go about talking to each other. Karl Weick and Kathie Sutcliffe note that talk helps us capture more discriminatory detail, more nuance, more perspective, and more substance about what is unfolding in front of us, and even helps us recapture it. The way that mindset gets enacted through talk involves basic, everyday personal practices like active listening, being more open, sharing, elaborating, building on information that is being shared, and expressing some vulnerability. Psychological safety is a close cousin of some of what we are doing in the high reliability world.

As for which processes might be more important than others, in general, healthcare organizations tend to be better on the two sub-components of the high reliability principles that we describe as containment. Something has gone wrong, and the organization needs to recover, so commitment to resilience and deference to expertise come into play. Catherine Klein and her colleagues have a great paper on what they call dynamic delegation, when teams pass leadership responsibilities when performing urgent, unpredictable tasks.1 Trauma teams are good at reconfiguring and adapting in response to those kinds of circumstances. Same thing with resilience; people need to learn as they go in chaotic situations. Something has gone awry. How do we recover? I think healthcare teams can be quite good at that. 

I see the anticipatory capabilities less often: preoccupation with failure, sensitivity to operations, and reluctance to simplify interpretations. In my survey work, I find that reluctance to simplify interpretations tends to be rated lower than the other four HRO principles. That may be because it is tough to keep multiple ideas active simultaneously, especially for healthcare providers and teams, because they are often trained to rule things out, a key skill in medical decision-making. There is also substantial pressure on care providers to do things quickly and efficiently. Keeping more ideas in play might seem at loggerheads with that. If I were to recommend where healthcare organizations should place more emphasis, it would be on developing those anticipatory capabilities.

Sarah Mossburg: Some of your research discusses how organizational mindsets are maintained. Are they durable? Do they come and go? What role might staff or leadership play? What are some of the ways that high reliability mindsets are best sustained in healthcare organizations?

Tim Vogus: This is a critical question. High reliability principles and safety culture are too often treated as things that can be accomplished and then crossed off the list, or treated as once-and-done projects, which is not the right way to think about this. High reliability and safety culture are fragile, and they need to be re-accomplished. A great example of fragility comes from work by Peter Madsen and colleagues, who studied a pediatric intensive care unit that became highly reliable and then fell apart. And the source of both becoming highly reliable and falling apart was leadership. A new leader, who was educated on the principles of high reliability, came to the organization and started right away with a clear plan. They began interdisciplinary rounding. They intentionally deferred to the expertise of the respiratory therapists in all things breathing, whether the staff member was a hospitalist (and of higher status) or not. Those are things that happened on an everyday basis because there was the drumbeat of the leader emphasizing, modeling, and reinforcing high reliability principles. And these changes led this unit to become highly reliable after having not been.

However, here is the fragility part. When that leader left, the culture reverted completely back to a much more traditional approach where everybody stays in their lane, stays in their role, and the members of the unit are not talking as much. A big part of the high reliability mindset is the nature of the talk, the openness, the vulnerability, and the active listening. With the departure of that leader, all that went away, and the unit went right back to what they were before, an underperforming pediatric intensive care unit. So that is some of the fragility there, and the importance of leadership, of setting the tone, instilling practices, normalizing the right kinds of conversations, infusing the right kind of language which are consistent with the principles of high reliability organizations. 

Some of the writing that I have done with Brian Hilligoss on this has been about habits: how do you make these complex ideas about preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and deference to expertise part of your everyday operations? Well, you spend time thinking about what we can embed in a mundane way and think of it as a checklist, a set of questions you ask whenever you’re thinking about something you might change or improve. Are we making sure we are thinking preemptively about what could go wrong? Are we spending time considering alternate assumptions? Do we have a sense of how this function plugs into the upstream and downstream workflow? Do we make sure we have the right people in the room at the right time? Are we listening to all the perspectives that might be affected? Do we have ways built into the process to check if we are going off the rails?

Sarah Mossburg: So, leaders are critical, but also building these ideas into specific structures and processes in order to create the habits.

Tim Vogus: Leaders are critical, and leaders change. I would ask, is an organization really a highly reliable organization if it is dependent on any single leader? The answer is no, because if it is that fragile, high reliability principles have not been embedded in everyday work. Instead, the organization is relying on the leader to apply a consistent level of force, which is an unrealistic expectation. Not every organization is going to have that kind of leader. A good leader makes the change happen and then uses infrastructure to sustain the change and make it stick.

Sarah Mossburg: You have noted that while healthcare systems have improved patient safety outcomes, they have not achieved full, highly reliable performance, which could be due to limitations of the evidence. Could you tell us more about the evidence that could support higher reliability organizing and improved patient safety outcomes if there were no limitations?

Tim Vogus: We lack a clear model and set of interventional tools for taking an organization or entity from not reliable to highly reliable. In my view, this is a fundamental failing in the literature. During the earliest days of high reliability research, aircraft carrier flight decks, nuclear power control rooms, and air traffic control towers were already highly reliable. It was not clear what got them there. And some of the work I have done with Dawn Iacobucci and others has looked at the antecedents of these high reliability principles. We have examined what we refer to as reliability-enhancing work practices that shape the selection and training of people (emphasizing the importance of the interpersonal as well as the technical) and designing work to best leverage expertise. What we lack that practitioners need and want are specific interventions that take an organization from not reliable to highly reliable. Heather Gilmartin started to do some work in this domain while thinking about toolkits.2

I have also worked with a group of researchers in Canada, led by Leahora Rotteau. We looked at implementations of high reliability in a system that did not go great, but in not going great, revealed a big problem: people assemble a pastiche of existing safety tools and say, well, if we slap all these together, we will get high reliability.3 I am oversimplifying, but I think some of the bundles of safety tools are just combining things that already exist and work well, like rounding or huddles. So let us just do them all together, and that will make us highly reliable. I am not sure that is quite right, because each of those interventions is not deployed in a precise way to elicit the specific mindset, talk, and behaviors that exemplify high reliability. But I and other researchers need access. We need to experiment and try different types of interventions, which is a big barrier. We need organizations that are willing to experiment systematically so we actually can do controlled, randomized trials and learn what works.

Large-sample high reliability surveys are another area where patient safety could be advanced. Kathie Sutcliffe and I developed a measure that has been deployed in various places. But most organizations do not routinely collect data about high reliability principles. Data on high reliability principles requires scale because it is a collective concept. We are not measuring individuals’ perceptions; we cannot just survey a wide array of physicians. The dependency of high reliability principles on scale arises because the shared perception of the extent to which a team or unit in a hospital is preoccupied with failure, for example, is important to assessing whether high reliability principles are truly present in practice. You need the people who are interacting with each other to characterize the extent to which these practices and processes are in place. So that has been a barrier. In terms of outcome measures, many of the studies I have done rely on a health system being willing to share safety data. And that can be difficult, especially studying it at a team or unit level. There are publicly available data related to safety, but those are at a hospital level. And there is a gap between the level of measurement and the level of those kinds of outcomes, which makes it harder to be precise. You need really deep access at organizations.

So, those are some of the big barriers. We need better examples of that journey to highly reliable, better interventions. And there may be opportunities for that because the implementation science community has been growing rapidly. I work with some people in that domain and see that they are focused on readiness for change. There are many different models of change and implementation, and you can think about high reliability in those terms. 

Sarah Mossburg: High reliability was first described in industries outside health care, as you mentioned earlier. I am curious. How have high reliability principles evolved as they have been applied in healthcare settings?

Tim Vogus: In a lot of ways they have, including in everyday human resource management practices. What I found in some of my work was that hiring could influence reliability.4 For what tasks are we hiring people? What factors are we considering when we select people? Are we interested just in their technical skills or also in their ability to work collaboratively? And are we training people consistently with high reliability principles? Are we designing work to make it easier for people to draw on their relevant expertise and be able to contribute to others’ work? We have learned that there might be underexplored sources of reliability. I’m thinking of things like everyday human resource practices of hiring, training, socialization, and performance management. Can we hire in a way that reinforces high reliability principles rather than just trying to differentiate people?

One thing that came out of COVID was a pretty consistent finding that safety culture eroded. Amy Wilson, Kelly Randall, Mary Sitterding, and I have explored the question, “Who is contributing to highly reliable performance?” COVID was a useful test case because all the “non-essential” people were sent home—the chaplains, the social workers, and the family members.5 We think they were underappreciated components of the safety system. They might get different information and information that could otherwise be missed. They might be able to get those weak signals like if I see my loved one, they just seem a little off today, and I communicate that, that seeming a bit off might get picked up by a chaplain or family member, when it might otherwise be missed entirely.

Sarah Mossburg: What are some innovative ways that high reliability principles are implemented in health care? What would you say works well?

Tim Vogus: I am not going to give big, sweeping pronouncements here because of the limitations we talked about earlier, but I do think the best organizations are the ones that embed the language and processes that support high reliability in routines and procedures. I think you can reappraise something like checklists as a way of embedding high reliability principles into everyday practice. 

I recently learned of a nursing unit that has embedded sensitivity to operations in an interesting way, by briefing members of the unit as a group. At the start of a shift, instead of doing handoffs about individual patients, nurses do an overall briefing of all the patients currently on the unit. Here is who has what, and here is somebody who might need help. It gives a view of where the workload is and who might have the most vulnerable patients. That is an innovative but simple way to increase sensitivity to operations. 

Sarah Mossburg: Are there implementations that miss important features of high reliability?

Tim Vogus: I think one mistake some organizations make is to trot out a set of safety-related tools and then not think intentionally during implementation about how to foster high reliability. When you do that, people just glom onto the language of high reliability in a way that is not exactly as the principles are intended. We talk about how commitment to resilience is intended to be a collective process, such as how do we respond to things that have gone wrong? How do we learn from them? We found that when a particular organization deployed these general safety tools, people talked about commitment to resilience as whether they were individually resilient or not, or were overwhelmed at work. Thinking of resilience only as an individual factor shows a fundamental misunderstanding, a failure to think thoroughly enough about the complex, organizational determinants that condition the problems we want to solve. Instead of saying “we are just going to help individuals become personally resilient and then our organization will scale up to being a highly reliable hospital,” we have to work on how we are going to marshal tools in a high reliability-oriented way to solve problems to create organizational resilience.

Similarly, we don’t want consultants coming in and just saying that a hospital or system is highly reliable, especially when in daily function nothing is different except we are using different words. High reliability is not just language without substance, and that is why I’ve put so much emphasis on the question of how do you link high reliability to the everyday routine practice. High reliability needs to happen on the frontlines every day. 

Sarah Mossburg: Along those lines, many organizations describe themselves as highly reliable. I am wondering how organizations can validate that assertion.

Tim Vogus: There are a few ways to think about this. One is upstream thinking: if you are catching things earlier, surfacing more near misses, and reporting more errors, that indicates a safer culture and shows preoccupation with failure. If you are surfacing things earlier, when you can actually correct them or improve the system so they do not happen, then you are manifesting a more highly reliable organization.

Engaging with some of the process measures and surveys can also validate high reliability. The safety organizing scale that Kathie Sutcliffe and I developed and validated, which reflects high reliability principles, can help assess what organizations are doing in practice, how people are talking with each other, and their shared experience of those interactions.6 I also consider tracking outcomes. Are we sustaining error-free, harm-free performance over a longer time horizon? That is another way to think about it.

Sarah Mossburg: What are some of the future priorities for high reliability organizations and patient safety? 

Tim Vogus: I think some of the important things to be thinking about concern the workforce delivering and sustaining highly reliable performance and patient safety. High reliability organization principles are potentially useful as a burnout intervention or to help people recover from moral distress. Studies we have done in recent years look at how higher levels of high reliability principles are especially helpful for these workforce issues. If a nursing unit has encountered higher levels of adverse events, and those adverse events spurred a change that embraced high reliability organization principles, adopting those principles can help reduce burnout because people feel like they have a pathway forward. People feel like their organization is going to help everyone move forward, which can help restore faith in their work and their organization.

Sarah Mossburg: Thank you so much for taking the time to talk to us today.

Tim Vogus: My pleasure. It was great talking with you.

References

  1. Klein KJ, Ziegert JC, Knight AP, Xiao Y. Dynamic delegation: shared, hierarchical, and deindividualized leadership in extreme action teams. Admin Sci Quart. 2006;51(4):590-621.
  2. Gilmartin HM, Connelly B, Hess E, et al. Developing a relational playbook for cardiology teams to cultivate supportive learning environments, enhance clinician well-being, and veteran care. Learn Health Syst. 2024;8(2):e10383.
  3. Rotteau L, Goldman J, Shojania KG, et al. Striving for high reliability in healthcare: a qualitative study of the implementation of a hospital safety programme. BMJ Qual Saf. 2022;31(12):867-877.
  4. Vogus TJ, Iacobucci D. Creating highly reliable health care: how reliability-enhancing work practices affect patient safety in hospitals. ILR Rev. 2016;69(4):911-938.
  5. Vogus TJ, Wilson AD, Randall K, Sitterding MC. We’re all in this together: how COVID-19 revealed the co-construction of mindful organising and organisational reliability. BMJ Qual Saf. 2022;31(3):230-233.
  6. Vogus TJ, Sutcliffe KM. The Safety Organizing Scale: development and validation of a behavioral measure of safety culture in hospital nursing units. Med Care. 2007;45(1):46-54. doi:10.1097/01.mlr.0000244635.61178.7a
This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.