In Conversation With… Kaveh Shojania, MD

November 1, 2015 

Editor's note: Kaveh Shojania, MD, is Editor-in-Chief of BMJ Quality & Safety and Director of the Centre for Quality Improvement and Patient Safety at the University of Toronto. He was one of the founding editors of both AHRQ WebM&M and AHRQ PSNet. He was also lead editor of (and authored six chapters in) Making Healthcare Safer, the evidence report produced for AHRQ following the publication of the Institute of Medicine report, To Err Is Human. We spoke with him about the evolution of patient safety research over the past 15 years.

Dr. Robert M. Wachter: What did you think the role of research would be in patient safety 15 years ago? Did you have any sense of how important research would be and what unique aspects of research might emerge as we tried to tackle something like patient safety?

Dr. Kaveh Shojania: The answer is complex. When this started, there was a bit of tension between what you might call the researchers and the evangelists. By evangelists I mean the people who were saying that we could learn a lot from other industries. Frank Davidoff wrote a piece a few years ago where he dug up a paper from David Sackett, the grandfather of evidence-based medicine (EBM) and clinical epidemiology, in which Sackett coined the phrase "snails versus evangelists." He was talking about the debates in public health at the time over screening everybody for cancer; the proponents were the evangelists. The snails said we should study this first because there are opportunity costs, unintended consequences, and so on. Frank pointed out that this same debate had emerged in health care quality and patient safety, and he went on to describe both perspectives sympathetically—the "just do it" improvement advocates versus the more restrained "we should be more sure this actually works" academic types.

When you ask about patient safety after the IOM report, To Err Is Human, we certainly saw that tension play out. My early thought was that the role of research would be to test some of these putative interventions. Things like: we should do root cause analyses on everything. We should have a no-blame culture. We should ban certain abbreviations. We should computerize everything. All these things varied in their level of complexity and were mostly drawn from analogies to high-risk industries—I thought there would be a role for testing them.

Even putting aside the debate between improvement evangelists and more research-oriented snails, some research on clinical interventions to prevent hospital-acquired adverse events—VTE prophylaxis, fall prevention, and the like—amounted to a form of patient safety research. The main thing I didn't anticipate, and it has been a pleasant surprise, is how much multidisciplinary research is advancing the science of patient safety—some of the sociology-meets-psychology-meets-clinical-trial-design work we've seen in the last 5 to 10 years.

RW: How has your thinking evolved in terms of this tension between the just-do-it camp and the we-need-evidence-about-everything camp?

KS: I really do think we're richer for that debate. The big difference between what we're trying to do here and the rest of clinical research is that in clinical research it's okay if things fail. A medication is in fairly widespread use; the researcher's job is to provide evidence to justify whether that use should continue. A negative study in that context has value—it shows that we should not be exposing patients to a medication that has little benefit, might result in harm, and certainly costs money. But most patient safety interventions, and quality improvement strategies more generally, are not already in widespread use. You're not doing anyone a favor by rigorously evaluating something that's not going to work. There's more of a collaborative role to be had between the evaluators and the implementers than I had previously appreciated. No one else is doing what you're trying at your hospital. So instead of rushing to a rigorous evaluation of something no one else is doing just to show that it doesn't work, why not optimize it first? What I've learned is that a happy marriage can be had between these groups. The evangelists were shouting things from the rooftops and rushing ahead with approaches that probably were no good. And the researchers, including myself, were probably making the mistake of investing all of our intellectual energy in the evaluation and not in making sure that the thing even worked.

The way my thinking has changed is that there's much more of a role for first making sure that the patient safety intervention in question is as good as it can be, and that it can be evaluated with a rigorous enough methodology that also engages the people actually trying to do it. Then if it looks promising—if it's going to be an accreditation standard or some other widely recommended intervention, like medication reconciliation, rapid response teams, or central line bundles—then of course there's a role for a trial if it's an expensive or high-risk thing. It is possible to marry those two perspectives in a way that I hadn't thought possible 15 years ago.

RW: Another tension is the degree to which a patient safety intervention can be "proven correct" by a single article or a small number of articles and then become a regulatory or accreditation requirement. That might be a way of getting it out there quickly and disseminating it faster than other clinical practices—good if it turns out to be a good thing and bad if it turns out to be a bad thing. How do you think through that issue, which is a fairly unique aspect of patient safety?

KS: There are definitely some similarities and some differences with the rest of clinical medicine. In clinical medicine we've tended to err on the side of doing trials over and over again. There are famous examples where numerous trials were done when they weren't needed—nobody needed the last 10 trials to prove that thrombolytic therapy works for acute MI. Frankly, even when we do have proven interventions, it takes forever to get them into practice. We don't want to make that same mistake in patient safety.

That said, John Ioannidis and others have shown that sometimes an initially highly positive study gets totally overturned when other people start studying the same thing. We have seen some famous examples, like perioperative beta blockers, where, by the standards of patient safety, we had five randomized trials showing that perioperative beta blockers were a good thing—a huge amount of evidence. But by clinical standards, especially in cardiology, they were five small trials. Then POISE came out and overturned the whole thing. Like everything else, you have to consider the risks. If you're talking about something like abbreviations to avoid, well, there's minimal effort and little chance of harm. Maybe it's just a slight inconvenience for some doctors.

It's like that with a lot of patient safety interventions. Medication reconciliation is an interesting example. It's nice to think that there's no harm, but actually there might be. In practice, as opposed to in research settings where people are really enthusiastic about medication reconciliation (and almost all the studies have had pharmacists perform it), we often just take the same lousy medication history we took before and now call it "the best possible medication history." So you are almost hardwiring some problems into routine care by saying medication reconciliation has been carried out. You also need to consider the opportunity costs and what could go wrong with the intervention if we rush to disseminate it. The other issue is: Will this really work in most places? Are you widely disseminating something where context really matters? Complex interventions may not work everywhere, for financial, infrastructure, or interprofessional collaboration reasons. If you need to decide whether to widely recommend or mandate something, you need more information on how it plays out in different clinical settings. It's not that different from clinical research. I definitely am more open to the idea that there will be times when we don't need that much evidence. But again, it still comes down to the stakes involved and what the price of being wrong could be.

RW: I'm about to start the third edition of my safety book. When I went from the first edition in 2007 to the second edition in 2011, there was a massive change in the patient safety literature, to the point that the book was 50% thicker. These were the things that were very, very different from the early years to the middle years of the safety field: our understanding of IT; a shift from talking so much about errors to talking about harms; the global trigger tool; the emergence of the checklist; new safety targets; the tension between no blame and accountability. I'm trying to do the same thing now, and I'm struggling to come up with as many big paradigm shifts over the last 3 to 5 years. I wonder if you have ideas about how this field is emerging and evolving. Have we reached a plateau where we're working on the same stuff (and maybe getting a little smarter about it), but the paradigm is pretty well established?

KS: Even though everyone waits for some big paradigm shift, most progress is incremental. That's what we've seen in patient safety. The central line bundle is a wonderful thing that Peter Pronovost has championed. But realistically, that's a tiny problem—at least we have a solution for it. Someone else will do something with some other problem, right? There will be modest incremental improvements. Obviously, CPOE systems will eventually deliver on their promise and hopefully not kill anybody. That's the way medicine usually progresses: a new breast cancer drug comes out, a new treatment for MI, some new type of surgery. But lately I have been rethinking this a little. I'm wondering if fundamentally we need to hunker down and commit to deep cultural change, or whether everything else—one narrowly targeted intervention after another—is just rearranging deck chairs on the Titanic. I feel like you can sprinkle around patient safety initiatives that target very specific problems, but I don't really know that we'll be that much better off.

There's no question that people's attitudes have changed, in academic centers and even in nonacademic centers. I go to rounds now and sometimes don't emphasize to my residents and students that safety is my area of interest, and people will spontaneously talk about errors they've made. We were at an M&M conference where the first phrase out of someone's mouth was, "I think I made a really bad mistake last week. I forgot to do such-and-such, and now the patient has had this happen to them." And this is a resident saying this in front of a whole room full of people.

RW: I have residents in our M&M say, "Here's what I thought was going on, but I was worried that I might have been anchoring."

KS: Yeah, it's amazing. I make the analogy with EBM as a cultural or a sociological movement. In the late 1980s, it seemed like we were going to try to make everyone into Gordon Guyatt or David Sackett [two EBM pioneers]. But obviously that was foolish. The vast majority of clinicians don't know anything about clinical epidemiology or that much about evidence-based medicine. But they know the language and they're sympathetic to it. Maybe that's what will happen with patient safety; the next generation of clinicians will all have grown up with this language and these ideas. That in itself will be an improvement.

It sometimes seems like a whack-a-mole approach. You hear about this problem and so you try something and maybe you solve that one problem. Of course it's nice if you can solve that problem, but there are hundreds of other very specific problems, and then there are also the deeper levels of problems related to communication and teamwork. I do wonder, even though I shied away from this 10 years ago, if culture, communication, and teamwork are the next wave.

Ken Catchpole wrote a piece for BMJ Quality & Safety on checklists. He pointed out that the way in which checklists were used in aviation is often misunderstood in health care. For instance, they weren't trying to do all the teamwork and communication things that we try to do with checklists in health care. They didn't do that because they were already doing teamwork and communication separately and very intensely. We say that the surgical checklist isn't just a checklist; it's really a teamwork and communication vehicle. But that's odd because checklists are supposed to be just checklists, right? Because when I go to the grocery store I don't want to have a shared mental model with the cashier or with my wife or anybody else. I just want to look down and remind myself to buy eggplant or whatever it is. I feel like that's what we need to do in health care now. We've realized the importance of teamwork and communication. Maybe now we really need to embrace that and think about what interventions will deeply improve those types of behaviors so that other more concrete interventions will work.

RW: One of the low moments of the safety field was when the Landrigan study came out and showed that nothing had improved in a bunch of North Carolina hospitals. Over the last couple of years other studies have shown at least some evidence of improvement. Where do you think things are now?

KS: I think we're seeing the limitations of adverse events as a metric. Charles Vincent has been writing about this lately, as has Eric Thomas. One of my colleagues and I just wrote a piece about this too. Adverse event rates launched patient safety, and I can't imagine a better metric early on in the field. In areas where we haven't done any work, it still makes sense to use adverse events as the main metric. A couple of years ago BMJ Quality & Safety published a home care adverse event study. There was the pediatric adverse event study 4 or 5 years ago. That kind of study gives you a general lay of the land, but adverse events are not a very good outcome for tracking progress. The reason is that the adverse event rate is a very heterogeneous measure; basically, all that these events have in common is that they were injuries or harms due to medical care. So you have surgical problems, medication problems, diagnostic problems, infections, and so on. Not every preventable adverse drug event will even be related to ordering or administration or dispensing. You're measuring a heterogeneous mix of outcomes that won't be addressed by the same interventions. You could invest years in getting your CPOE system to work and maybe produce a reduction in preventable adverse drug events at the ordering stage, but you would still have all the other medication problems and tons of nonmedication problems. That improvement in preventable ADEs may not show up at all in a broad look at all adverse events. So even though the adverse event is considered the gold standard, it's a suboptimal gold standard.

Once you're pretty sure what the problems are, it makes more sense to measure those problems. So if you've done something to improve medication safety that was mostly related to ordering errors, obviously you should measure those. If you've implemented the central line bundle, then you should measure central line–associated infections. If you're looking for improvements, you need outcomes that relate to them—you cannot go back to an omnibus blunt measure like the adverse event. All that said, the other explanation for Chris Landrigan's study and other similar ones is that most hospitals haven't implemented a lot of effective interventions. It would actually have been a shock if those studies had shown an improvement. But now we're also seeing that, even in settings where we think some reasonable things have improved, you probably won't detect them with either the trigger tool or even the more research-oriented version of it, the classic Harvard Medical Practice Study adverse event outcome.

RW: How do we know that we're not all just getting frustrated and saying that the measures we once called great aren't good enough because they're not detecting the improvements we're pretty sure we're making?

KS: If you think you've made a serious improvement in medication safety—you implemented barcoding in a really great CPOE system with decision support—you'd be crazy to do an adverse event study. It's a waste of time. It's certainly not sensitive enough and may not even be specific. What you want to do instead is define some well-specified medication harms: we're looking for opiate harms, bleeding harms, this harm, that harm. It becomes more like regular clinical research then. In clinical research, we don't use some general measure of heart health. At some point you say, we're talking about acute MIs, or we're talking about admissions to the hospital for heart failure. That's actually a testament to the progress we've made. These measures are meaningful to patients and meaningful to clinicians, and they relate to what we've improved on. I think that makes more sense as a way to document progress.

RW: When all of this started, the world of clinical care was paper based and the world of evidence was largely paper based. Both are electronic now. How has the electronification of the literature and the ability to collect data electronically changed the nature of research in patient safety?

KS: A lot has happened in the past 15 or 20 years in research in general, and in electronic dissemination in particular. As a journal editor, I'd say the impact social media has had on research dissemination is palpable. Electronic strategies have created a much larger group of people who can keep up with what's going on.

PSNet is a great example, right? Think about all the people who get the weekly MyPSNet emails. That's a very different group from the people who previously would have religiously picked out their favorite journals, or the ones most relevant to their field, and had some idea of what was being published—never mind actually reading it. For a field like patient safety, an easier way of disseminating things is particularly important. I know one of the reasons you wanted to start PSNet in the first place is that the field is so heterogeneous that—who could possibly keep track? If you imagine the most dedicated academic from 20 years ago, maybe they would have had 5 to 10 journals that they watched. But what are you going to do if you're in patient safety? You have everything from Harvard Business Review to the major clinical journals to health services research journals, and now safety and quality journals, nursing, pharmacy, and law journals.

The chance of being able to keep up in patient safety is vanishingly small—not even from a volume point of view, but from the sheer breadth of the sources of potentially interesting information: the occasional sociology journal, science journal, medical journal, all these different sources. A service like this could never have been as effective 20 years ago. But now that everyone is so plugged in, using multiple channels from social media to regular email to RSS feeds, it works really well for a field like patient safety—especially with a source like PSNet ready to do all of the culling from all of the different places that patient safety information might come out of.

RW: When you talk to your trainees and try to give them a few hints about how to keep up with the literature, are there any things that you've discovered over the years that have been particularly useful?

KS: Well, honestly—and I'm not just saying this because of your role—there's no reason not to use PSNet. Most trainees now know more about how to keep up with various electronic sources than I do, even though I don't feel like I'm that far removed from it. But for patient safety, this is a case where the human component is crucial. I used to do a lot of literature searching and systematic reviews, and I'm pretty good at it. If I want to know whether something new has come out on venous thromboembolism, I can easily create an automatic alert for that.

An automatic alert for all of patient safety is pretty hard. There's just too much, even on a weekly or biweekly basis. The human screening that takes place with a service like PSNet is crucial. So I recommend some combination of finding the right secondary source—in this case PSNet—and following certain people on Twitter.
