In Conversation with...Diane Rydrych, MA

June 1, 2007 

Editor's Note: Diane Rydrych, MA, is Assistant Director of the Division of Health Policy at the Minnesota Department of Health, where she oversees its successful and influential adverse health events reporting system. We asked her to speak with us about the Minnesota initiative and some of the broader lessons for state error reporting systems.

Dr. Robert Wachter, Editor, AHRQ WebM&M: Describe what the Minnesota reporting system is and what it does.

Diane Rydrych: Our system requires that all Minnesota hospitals and ambulatory surgical centers report to the Department of Health whenever any one of the NQF [National Quality Forum] "never events" takes place. They report to us within about 2 weeks after the events are discovered, and then they also submit RCA [root cause analysis] and corrective action plan information to us, which we evaluate and work with them on improving through an iterative process. We publish an annual report that shows the number of events that happened in each facility, the types of events, and the outcomes of each event. We also do a lot of education in the facilities and for consumers.

RW: What are your mechanisms for education? How do you reach out to the hospitals and to the public?

DR: A lot of different ways. We contribute to a quarterly newsletter produced by the Minnesota Hospital Association that keeps hospitals and surgical centers up to date on any changes to definitional guidance, how the reporting system works, or how they're going to receive feedback and respond to our feedback on their RCAs. We send out safety alerts when there's an issue we believe they need to be aware of based on the reports. Then a couple of times a year, we go to different parts of the state to do day-long in-person training for them, focusing on how to do a thorough RCA and how to develop a stronger corrective action plan.

RW: When you send out a safety alert, would it be because you've seen this thing happen once or because you've seen a pattern? In either case, can you give an example of one?

DR: It really depends on the case. There are times when we've seen something once and felt that was enough to put out a safety alert. For example, we saw a case of suicide by hanging in an inpatient psychiatric facility and felt that the information about structural issues within a facility that led to that was important enough to merit an alert. In other cases, it is based more on a trend. For example, we had a safety alert about retained objects in labor and delivery, based on having seen a cluster of retained sponges after delivery. We thought that perhaps this issue had been flying under the radar, and we wanted to bring it to people's attention.

RW: Talk a little bit about the genesis of the system and maybe even the politics. Who proposed it? How did it make it through the legislative process?

DR: In Minnesota, we have something called the Minnesota Alliance for Patient Safety (MAPS), which was established in 2000, right after the IOM [Institute of Medicine] report came out. It was founded by the Department of Health, the Hospital Association, and the Minnesota Medical Association, and its mission was to be the state coordinating body on patient safety issues. That organization started talking about getting in front of the IOM recommendations on mandatory reporting. The legislation came from MAPS and was supported by the Medical Association, the Hospital Association, and the Department of Health from the get-go, as well as by the Governor, which probably made it a smoother process than in most other states. In other states, it is often the health department that is going to be responsible for administering the program, and they're often not really on the same page with the hospital association from the beginning. So it ends up being a bit more of a combative process than it was here in Minnesota.

RW: It's a little bit surprising that the hospitals would have been on board from the get-go with the idea of identifying individual hospitals. How have they reacted as that's played out?

DR: With wariness.

RW: I would think so.

DR: There was certainly apprehension for the first couple of years. Nobody knew how the public and media were going to react to having individual hospitals named and having a number of deaths or serious disabilities associated with that hospital. Once it happened and once the first report came out, I think everyone realized that it wasn't going to be that worst-case scenario that they had feared. In general, the coverage of the report each year has been fairly positive and fairly good at understanding what the point of the system is: that we need to have more transparency, and we need to have these conversations about safety and about why events are happening. In general, I think the media has done a pretty good job of keeping the numbers in context, where there are x number of events per x number of patient days or admissions, as well as keeping the focus on learning from those events rather than solely on the numbers. The reaction has helped to reassure the facilities. However, there's still apprehension every year when the report is about to come out, especially for new facilities that haven't been in it before.

RW: But it's never a surprise to a facility. They've reported the events to you, so they're just worried about what the consequences will be.

DR: You're right; it is no surprise for the facility because they've reported it to us. They've been through an evaluation of their RCAs, and they've had a chance to review what's going to appear in the report, so they know exactly what it's going to look like. But the report gets coverage in our larger Minneapolis and St. Paul metropolitan newspapers and in local newspapers around the state. In certain communities, there's a concern that the local paper is going to focus on you or target you because of something that happened. Some articles have been less positive or have targeted facilities a bit more, but even so, those worst-case scenarios haven't happened.

RW: As I have gone through the data, it's interesting that the vast majority of hospitals seem to have one to three events per year. Are there any grounds for skepticism about the accuracy of the reports—perhaps hospitals feel that they need to report some errors, but not too many? And what sort of auditing is done? Do you wonder whether some major errors are still unreported?

DR: That's one of the big unknowns. It's something that we have struggled with here in Minnesota and that other states have struggled with as well. At least now we have somewhat of a benchmark with the report, but we've never really known what the true number is. I don't think we have any reason to believe that there's any deliberate non-reporting. So I don't subscribe to that skeptical viewpoint—that one to three is a safe number. In fact, each year we have had a couple of facilities that have reported a larger number of events. We work closely with the hospitals, enough so to know that they're very earnest in identifying reportable events and wanting to understand why they happen. So we've always had that positive relationship, which helps me to not be skeptical about it. But that said, I think in any state where there's a reporting system like this, you cannot know if everything is really being reported. In certain categories, particularly with something like pressure ulcers, there is almost certainly underreporting. Are pressure ulcers not being discovered, or are they not being correctly staged so that people know that they're reportable?

RW: Which of course is part of the issue: a better hospital might be more assiduous about finding things like pressure ulcers and other errors.

DR: In this year's report, we did have a few facilities that were willing to tell their stories. One was a facility that had a high number of pressure ulcers. They told the story of how they implemented a really innovative multidisciplinary skin care team that conducts skin care rounds. As a result of that concerted effort, they identified many ulcers that they probably wouldn't have found before. Now that their system is more established, they're hardly finding any. So they saw an increase, and I think they were rightly concerned about how that increase was being perceived.

RW: A number of other states have taken your example and run with it. As they do that, have you seen any new variations on the theme or interesting learnings from different ways of organizing state reporting systems?

DR: Unfortunately, right now there's not quite as much sharing of information across states as we might want. You know, there are groups like the National Academy for State Health Policy (NASHP) that are trying to bring states together and share learnings, but on a day-to-day level it would be great to be in consultation with people from Connecticut or New Jersey, for example, to try to sort out some of these issues and see where we're having the same challenges and where we're not. Since every state is putting their own spin on it, they're all having their own challenges with implementation. But I think we're all experiencing some of the same things once we get going. Some of the challenges relate to how you define these events. They look pretty black and white when you look at the NQF list of 27. But when you're really putting it into practice, you need to interpret a lot of gray areas for people. I think other states are struggling with that as well. They're struggling with how to present the information to the public in a way that's meaningful. They're struggling with that question of denominators and how you know how often these events are really happening. They're struggling with numbers that may be increasing when they don't know whether it's a real increase or whether it's improved reporting. A lot of those same issues are happening in other states.

RW: How would you compare or contrast the advantages or disadvantages of your model and that of other states, for example, Pennsylvania? [Editor's note: Pennsylvania's system requires reporting of "serious events" and "near misses" and receives nearly 200,000 reports yearly.]

DR: The advantage of a system like ours is that it's manageable. We get a certain number of events reported each year. We have a team of people who look at every RCA and every corrective action plan, review them, and make sure that they're thorough enough. So we feel like we're really able to focus on what's causing these events and what's being learned from them, and to provide education and feedback to the hospitals. We feel like we can do a good job with that. Pennsylvania's system gets a remarkable amount of information, but I know that they can't go as in depth on some of it as we can, just because of the large numbers. There are advantages and disadvantages there. I look at the newsletters that they put out—they do a wonderful job of providing information back to facilities and learning from what's submitted. But I wonder sometimes whether it becomes overwhelming for them, or for facilities that are getting this information and struggling to figure out what to focus on. The downside of our system is that it only captures that narrow list of never events. I get calls sometimes from patients or family members when something really sad has happened to them in the hospital, and they want to know: is it going to be in the report? How is it being dealt with? And I have to tell them that this system doesn't capture that really unfortunate event. So that can sometimes be a frustration.

RW: If the Joint Commission or CMS [Centers for Medicare & Medicaid Services] or NQF, as national organizations, said, "Wow, look at what Minnesota's done. We think it's terrific. Let's create a system of national reporting of never events," would there be an advantage to state entities continuing to do it because of the local touch? Or would that be a better world because one system would be standard everywhere?

DR: I think it would make a lot of sense to have state organizations be involved—local touch is a good way of putting it. We know the facilities that we're dealing with, and they know us. With this type of information, trust issues are huge. For us it was a challenge, as primarily a regulatory body, to be the one collecting this information and to try to convince people that we were really going to focus on learning and quality improvement rather than bringing the hammer down. That trust issue is going to be magnified if it's a national body working with this type of information. If we had a consistent system that was adopted nationally, I think that would be wonderful. That was why we went with the NQF model in Minnesota: the hope that a national standard for the collection of adverse event or error information would eventually emerge. The ability to do more sharing of information and have more consistency across states would also be a real leap forward in terms of what we could learn about why these events happen.

RW: Let me ask you one last question. Your report says, "The information in this report should not be used to compare the safety or quality of facilities." If a relative called you and said, "I want to figure out whether I should go to hospital A or B in Minneapolis-St. Paul," and A or B had wildly different reported rates, what would you tell them?

DR: I would tell them that this is one piece of information that they should look at, but that there are other pieces of information that are also important to consider. One thing that we've learned is that there are a lot of reasons why these events happen and why the numbers might be higher or lower in a given year. I do think it's important for consumers to look at this. We printed a consumer guide this year as a companion piece to the report, just to acknowledge that what we're doing is so highly technical that most people can't understand it. But I think the report is more valuable to consumers if it's paired with other types of quality information. For example, if you're going in for a heart procedure and you can find information on outcomes of patients who have had similar procedures, or if you can find information that's relevant to your particular situation—whether it's asthma, diabetes, high blood pressure, or whatever—that information is going to be very valuable as you make your decision. So it's an important piece, but only one piece. That's the message that we've tried to get across in the report. Not that people shouldn't ever use it to make comparisons, but that it's more valuable to take it in the context of the larger sources of quality and safety information that are out there.
