
In Conversation with... Albert Wu, MD, MPH

July 1, 2008 

Editor's note: Albert Wu, MD, MPH, is Professor of Health Policy and Management at the Johns Hopkins School of Public Health and is presently working with the World Health Organization's World Alliance for Patient Safety, based in Geneva. He is a leading expert on several aspects of patient safety, including disclosure and evaluation. He recently wrote a commentary on the use of root cause analysis in patient safety in the Journal of the American Medical Association (JAMA).

Dr. Robert Wachter, Editor, AHRQ WebM&M: What got you interested in the topic of root cause analysis?

Dr. Albert Wu: Traveling around the country, I have been struck by the number of people who spontaneously volunteer comments about root cause analysis, comments implying that they have, at best, mixed feelings about its effectiveness and efficiency as a tool for helping them improve their institutions. They typically begin by sighing or griping that they are in the midst of conducting a root cause analysis, that it is consuming all of their time, and that they are not sure what the result is going to be, whether it will actually help their institution improve quality or safety.

RW: So as you then began to research the literature, what were your main findings?

AW: Although we are living in an era of evidence-based medicine, root cause analysis was widely adopted by the medical community in the 1990s without the benefit of much evidence. Every institution now conducts root cause analysis. Thousands of health care workers devote many hours to conducting these analyses, yet root cause analysis has never really been evaluated.

RW: Is there any evidence that it's effective in other industries outside of health care?

AW: Root cause analysis has been quite effective in nonmedical industries, including some heavy industries and aviation. In medicine, it was more or less adopted because it had good face validity. Jim Bagian, who has a background in aviation, was one of the first to suggest that the VA [Department of Veterans Affairs] try out this approach to investigating its problems. One of the reasons it was taken up so readily was that it made sense: you look systematically at an incident, try to figure out what the root causes are, and then try to find solutions. It's relatively easy to find problems or the causes of problems, but good corrective actions are much harder to find.

RW: Speak philosophically about the role of evidence, because some people would look at your argument in the JAMA paper and say that asking for evidence for root cause analysis is unrealistic. As we've come to understand that many medical errors do relate to systems problems, isn't the face validity of a method whose goal is to unearth all those systems problems, and then create a plan to fix them, high enough that it's reasonable to ask why you should need evidence for it at all? It's not like a new drug.

AW: Well, I think that evidence-based medicine can be overdone. It's common knowledge that we have evidence for a relatively small fraction of what we do in practice. But I think that the face validity of root cause analysis is restricted to finding out what the problem is. It's clear that individual investigators and institutions have discovered things that cause problems, some of them surprising, and they have learned from this. Doing a root cause analysis can help you realize that medicine is a system and that the system is flawed: the system itself causes errors and safety problems to occur. However, I think the face validity doesn't extend much further than the initial analysis. When it comes to finding solutions and then following up on whether those solutions were adopted and whether or not they were effective, that's where things fall down. Any institution you speak to will admit that it is much better at finding causes than finding solutions. In general, there tends to be little follow-up to see whether improvements can be demonstrated.

RW: So is that an argument to study various follow-up methods or to scrap the process until there's better evidence?

AW: I think it's too late to scrap the whole process. But you can make an argument for at least starting to follow up and to systematically collect information on what actions are recommended and what actions are taken. And, if possible, particularly for problems and outcomes that are pretty common, to follow up to see whether there's any evidence that outcomes have improved.

RW: So let me push you to disentangle some of the arguments you made in the paper. One is that individual organizations don't seem to follow up on their root cause analyses very well. Another thread says that everybody is reinventing the wheel in a very inefficient way. And a third is that we don't have a method to roll up the solutions into something more robust, because that must be done at a higher level, a national level, for example. How do you think about those issues?

AW: I would say those are three largely separate issues. The first is that in many cases the people who do root cause analysis are not trained well enough. Consequently, the results that come out of many root cause analyses are not that useful. The second problem is that there tends not to be much follow-up, so there is no way of knowing what has happened. When people try to make changes, like most other attempted fixes, the fix works for a few days or a few weeks and then they lapse back to their previous behavior. The third problem is reinventing the wheel. Virtually every institution has problems with, for example, medications being incorrectly administered, and almost every organization tends to come up with its own local solution. The problem really needs a higher-level fix.

Here's an example that we gave in our paper. The patient was receiving patient-controlled analgesia (PCA), which included a local anesthetic and a narcotic. This is supposed to be given into the epidural space. Unfortunately, the nurse connected the tubing to an IV [intravenous] catheter. The patient did not succumb, but it could have been a lethal episode. The root cause analysis identified a number of problems, but what the team really wanted to do was to prevent tubing for an epidural infusion from being connected to an intravenous catheter. However, they felt that they couldn't do that, so instead they reeducated their staff. They took some actions, but if you asked anyone in the institution whether these were likely to be effective, they were not at all confident that they had done anything worthwhile. In fact, a year later there was an almost identical incident at the same institution. And this happened after a lengthy root cause analysis, which took perhaps 100 hours to perform. A number of policy changes were made, but things were not safer. Ideally, someone at a higher level than the individual hospital would have recognized this as a problem and perhaps made a recommendation to all the manufacturers of PCA tubing, identifying it as a problem that should be eradicated. That kind of solution has been achieved in aviation, but it has been achieved perhaps only a few times in medicine.

The Department of Veterans Affairs is capable of doing this and, in some cases, does it. All of the root cause analyses from their hospitals are reported to their National Patient Safety Center. Several people monitor what comes in. If they notice a particularly prevalent problem, they collect the cases, try to see if there are common elements, and then design and post a solution. When things work at the VA, I think they could be a model for the way things should go for general hospitals in the United States.

RW: What are we learning from systems like the VA or the state of Pennsylvania in terms of whether these roll-up clearinghouses for the results of root cause analyses are able to do what it sounds like you hope they could?

AW: Well, I think we're learning a few things. First of all, if you talk to people off the record, they will tell you that a large percentage of the root cause analyses presented to any big organization are not very useful. When you examine them closely, most really haven't been done as well as they ought to be, and the findings are difficult to interpret. They're also difficult to combine with one another in a way that would let you find a common solution. This is a problem that even the VA frequently confronts. Another thing that England and large states like Pennsylvania have discovered is that it is difficult to analyze the millions of cases that get reported. It's a little bit like drinking from a fire hose: it's difficult to figure out the patterns in the individual drops when you're drowning in thousands and thousands of reports.

RW: Is there a solution to that? It seems to me that this is, to some extent, the crux of the problem. On one level, you would like as much data as possible from these local analyses to be rolled up into something larger. On another level, you very quickly get overwhelmed with signal-to-noise problems.

AW: The problem with general incident reporting has more to do with pattern recognition. Most hospitals in the United States do only several root cause analyses a year, so the total number of root cause analyses is not indigestible. It would be possible to look at these very detailed analyses, if they were done in a standardized way, and to classify them, even using some form of automation. The VA is now using some artificial intelligence tools to search reports. Because root cause analysis reports bear more resemblance to one another than the myriad incidents reported all over the country, the VA is able to extract themes automatically. They then use experts in patient safety to look at what may be a signal and to help interpret what's going on. One thing that is necessary is to have patient safety experts, skilled in root cause analysis and in handling this kind of information, who are constantly looking over the data.
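As a purely illustrative aside, the kind of automated theme extraction Dr. Wu describes can be sketched in a few lines: free-text analyses are converted into numerical vectors and then grouped so that similar reports cluster together for expert review. This is a minimal sketch under assumed conditions; the sample reports, the choice of scikit-learn, and the cluster count are all invented for illustration and are not the VA's actual tooling.

```python
# Illustrative sketch only: cluster free-text incident/RCA summaries into themes.
# The reports below are invented examples, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "epidural infusion tubing connected to intravenous catheter",
    "PCA narcotic line misconnected to IV port",
    "wrong site marked before orthopedic procedure",
    "site verification step skipped in operating room",
]

# Turn free-text reports into TF-IDF vectors, then group similar reports.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Each cluster is a candidate "theme" handed to a patient safety expert for review.
for label, text in sorted(zip(labels, reports)):
    print(f"theme {label}: {text}")
```

In this framing, the clustering only narrows what the experts have to read; the signal still comes from the expert review step described above.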

RW: In your perfect world, does an organization exist today that could receive all of these root cause analyses, sift through them, analyze them, and come up with broad solutions, or would that have to be invented?

AW: That organization doesn't exist today. Such an organization would need a few things. One, it would need enough experts and enough resources to be able to actually handle the raw data. Two, there would need to be a process to convince the people who could actually change things at a high level (for example, manufacturers, professional societies, and other health organizations) that redesign is necessary, and to actually compel them to do it. It may be that we need legislation so that, in cases deemed widespread enough or serious enough, manufacturers would be required to get together and fix the problem. But I think this can't just be legislated. All of those groups would need to participate in some way or another. It's probably in their interest that things be safer, but at the present time it's simply easier to say, "Well, let's get nurses who are smarter or physicians who are more capable; why don't you just do things right?" That, unfortunately, is not likely to be an effective solution.

RW: From what you've seen or read, what does the best root cause analysis process look like?

AW: First of all, the best root cause analyses employ people who know how to do them, people who have been properly trained and can do them efficiently, because these analyses can be very time consuming; they can take hundreds of person-hours. Those performing root cause analyses also need access to clinical experts so that the analyses are sensible and the solutions that are proposed at least have a chance of solving the problems. What then needs to happen is that institutions track what the solutions are and are accountable for showing that those solutions have been put into place. Ideally, they should try to see whether there is any evidence that a particular type of incident has been reduced. Root cause analyses are usually touched off by pretty serious incidents, called sentinel events, and any hospital board would like to see these things never happen again. At every meeting of a hospital board, it would be reasonable to present the sentinel events and the root cause analyses and recommendations. At every meeting, the board should also track what has happened to the previous quarter's or previous year's root cause analyses and recommendations and, if possible, what has happened to that kind of patient or that sort of incident, as far as can be known from the hospital's data systems.

RW: Of course, there's a statistical problem here: many sentinel events are unusual but horrible. In most hospitals, the difference between an event never happening again and happening once is not statistically significant.

AW: Well, absolutely. Wrong-site surgeries are a pretty good example of this: they happen at every big institution a few times a year, but not often enough to yield meaningful statistics. Perhaps the best that can be done is to verify that the solutions that were proposed were indeed put into place and are still operating. Another thing is to try to collect data from other sources. One possible source would be regular surveys of frontline health care workers, who could at least tell you their perceptions of safety and perhaps of compliance with particular important safety measures.
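To make the statistical point concrete, here is a back-of-the-envelope sketch. The baseline rate of two events per year is an assumed figure chosen only for illustration, and the simple Poisson model is our assumption, not anything from the interview; the calculation shows why a single event-free year is weak evidence that a fix worked.

```python
# Illustrative only: assumes a Poisson model and an invented baseline rate.
import math

baseline_rate = 2.0  # assumed sentinel events per year before the fix

# If nothing actually changed, a full year with zero events still happens by chance.
p_zero_one_year = math.exp(-baseline_rate)
print(f"P(zero events in 1 year | rate unchanged) = {p_zero_one_year:.3f}")  # ~0.135

# Event-free years needed before chance alone becomes implausible (p < 0.05).
years_needed = math.ceil(-math.log(0.05) / baseline_rate)
print(f"Event-free years needed for p < 0.05: {years_needed}")  # 2
```

Under these assumptions, even complete elimination of the event cannot be distinguished from luck until roughly two event-free years have passed, which is why Dr. Wu points to process checks and workforce surveys rather than outcome counts.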

RW: So you're basically saying that when the outcomes are very rare, you might think about whether the processes or structures are robust, and then the frontline workers are essentially proxies for the outcomes, because statistically those outcomes will be too unusual.

AW: I think so, and I think this is a case where the wisdom of crowds can be helpful. Frontline workers observe things happening all the time and, if surveyed in sufficient numbers, can tell you how safe they think things are and how well they think safety measures have been adopted. Bryan Sexton and others have shown that those impressions correlate very well with other measures of safety.

RW: Some people have read your article in JAMA and wondered, maybe this is not a reasonable method. We're putting a lot of time and energy and resources into root cause analyses. Would you tell them today to stop doing them or to work on making them better?

AW: I think the horse is out of the barn. There is much that is good about root cause analysis: the basic idea of not looking for one root cause but instead looking at the factors that allow safety incidents to occur. I think people have been educated about the fact that there is a health care system, and that the system produces both safety and lack of safety. So I wouldn't throw out the whole process, but it does behoove us to make it better and to study it. I think that we ought to follow up systematically; individual hospitals, groups of hospitals, and hospital systems, where they exist, should track what's going on. Root cause analyses ought to be kicked upstairs to somewhere central, so that they can be looked at in aggregate and those rare events have a greater chance of being detected. Also, we ought to do some research on what kinds of solutions might be both effective and doable for individual institutions and groups of institutions. Some solutions are at the level of making a policy or retraining an individual clinician. Others are at the level of redesigning an entire system or piece of equipment. And there is quite a lot in between. It would be worth studying what kinds of problems can be resolved by a relatively low-level solution and what kinds really do require a higher-level solution in order to make any difference.

RW: How do you balance the tensions between looking for system flaws, as root cause analysis forces you to do, and the recognition that sometimes the problem may be more individually based?

AW: I don't think there's quite that much of a conflict. Systems are built up of individuals, and individuals are a part of a big, complicated system. Sometimes the solution is improving the performance of an individual. And individual accountability, individual feelings of responsibility, and high moral fiber are crucial to the whole system working.

 

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.