
In Conversation With… Ashish K. Jha, MD, MPH

May 1, 2013 

Editor's note: Ashish K. Jha, MD, MPH, is professor of health policy and management at the Harvard School of Public Health and associate professor of medicine at Harvard Medical School. He has published widely on policy issues related to safety and quality, and is the senior editor of the new journal, Healthcare: The Journal of Delivery Science and Innovation.

Dr. Robert Wachter, Editor, AHRQ WebM&M: Tell us about what you think the state of research is in patient safety. I'm particularly sensitive to the theme that's come out in the last few years that, despite a lot of effort and resources, we haven't made much progress.

Dr. Ashish K. Jha: We have certainly not made the kind of progress that we would want. But there are a couple of things worth noting and celebrating. The most important is that our community has gone from looking at errors as purely the fault of the individual to something much more systems-oriented. My sense is that if you had said 15 years ago, "Errors are caused by systems," people would have looked at you funny. Today when we say that, people nod and have a sense that's the right way to think about this. From a process point of view, we're starting to get some traction. But when it comes to outcomes for patients, it's hard to celebrate where we are, and we really do need to move forward much more quickly over the next 5 to 10 years.

RW: How do we explain that? It feels like a lot of effort has gone into this. We've implemented all sorts of systems and processes and yet the Landrigan study a few years ago showed that the needle hasn't moved at all.

AJ: Some individual organizations have made tremendous gains. There are two separate notions. One is why haven't those gains really spread throughout the health care community? The second is why haven't even those very leading institutions been able to solve all of the problems of patient safety? In my mind the fundamental problem is around leadership and incentives. When I look at what Pronovost did, first at Johns Hopkins and then for Michigan, it's a herculean effort, and it takes someone with Peter's charisma, focus, and creativity to pull it off. It's been 4-plus years since his landmark paper in the New England Journal of Medicine showed that the median central line infection rate in Michigan fell to zero. The most important question to me is this: why didn't every hospital in America take that up the next day? Why have people had to work so hard to get hospitals to adopt a cheap and easy intervention that saves lives? Until we answer that question, we're really not going to make the kind of progress we need.

RW: What have been the obstacles to that kind of spread? To the degree that you think they relate to incentives and the lack of a business case, what does the system look like that builds in the right kinds of incentives?

AJ: In a survey a few years ago, we asked hospital board chairs to prioritize what they thought was most important for the board to oversee. Only about half of the board chairs said quality was one of their top two priorities. What that says to me is that the board, which is the ultimate accountability entity in a hospital, doesn't think quality is part of its business. And if the board doesn't think quality is part of its business, and quality in this case includes patient safety, then the CEO and top leadership probably don't think it's part of their business either. Until safety becomes part of the core mission of an organization, it's not going to get the kind of traction that other things do. You know, people adopt technology quickly, even though it's very expensive, because they have the incentives to do it. They know that they can recover their costs and then have it be a substantial moneymaker. Patient safety has not been a focus for organizations because fundamentally there's no strong incentive to do this.

RW: We've now gained some experience in three areas: one is not paying for certain kinds of errors or harms, the second is public reporting, and the third is pay-for-performance. Can you distill what you've learned about those three maneuvers insofar as they have changed the incentive in the boardroom?

AJ: We have these three main policy levers, as you suggest. What we've seen with public reporting is that it has led organizations to focus a lot on process measures. Compliance with these measures has taken off. We're giving aspirin and beta-blockers pretty much universally now, and you can mark that as some level of success. When people have tried to look at whether public reporting has improved outcomes, the evidence is pretty skinny. My interpretation? Public reporting reaches the level of the chief quality officer or the medical director, who finds it in his or her budget to hire a nurse to make sure that the hospital gets compliant with those measures. I don't want to paint with too broad a brush here. Clearly some hospitals have taken this on as a real signal to do improvement and have made substantial gains, but nationally you primarily see improvements in process and little improvement in outcomes when it comes to public reporting. The story on pay-for-performance has been more or less the same. We've tried a bunch of different pay-for-performance schemes. The fundamental problem with most of them is that they are pretty underpowered. The financial incentives at stake are 1%, 2% at most, and at that level hospitals are not motivated to fundamentally redesign the way they deliver health care. What they are willing to do is focus on some processes, make some modest changes, and get compliant enough to get some part of that bonus. But to fundamentally rethink the way they deliver health care—the 1% to 2% financial risk hasn't been enough.

RW: If you were the czar of Medicare, what would be the right number in terms of pay-for-performance to get people's attention? And what would you think are the policy challenges of getting to that number?

AJ: The unfortunate bottom line is that we don't know what the right number is. What I have called for is a lot more experimentation, but with much bigger numbers. So let's just increase the numbers substantially. Let's put 5% or 10% at risk. Or let's say that if there is a central line infection during the hospitalization, it's not just that we won't pay you extra money for that central line infection. Maybe we should not pay for the hospitalization at all. Things that actually have a real financial impact on the hospital's bottom line are much more likely to get the attention of the CEO and the board, and therefore much more likely to move the needle on patient outcomes. And we have to remember—that's the purpose of health care, to improve outcomes, not just to comply with process measures. So if I'm the czar of Medicare, I don't have a magic formula. What I would do is try three to five different schemes, vary incentive levels, try different targets, and be willing to make adjustments along the way. Study it closely, see what kind of impact it's having, and make changes as the data come in.

RW: You've painted the disconnect between improvements in processes and improvements in outcomes as being an issue around incentives and getting a portion of the organization engaged but not the highest levels. An alternative theory would be that we don't really understand the link between processes and outcomes and we are working on what the evidence tells us to work on, but they don't seem to be translating into meaningful differences in patient outcomes in the ways that we might have predicted. What do you think about that?

AJ: I think those two ideas are very compatible. Look at acute MI care, where we have the most evidence for how to manage patients. Even if you do all of those processes perfectly, there will still be large variations in outcomes across hospitals, because we do tons of things for patients with acute MI that are not captured in the process measures. So my feeling is that you're absolutely right, the link between process and outcomes is tenuous. We know how to improve certain processes, such as giving people beta-blockers or getting them an angioplasty in a timely fashion. But if we're going to improve outcomes, we actually have to get the hospital to rethink its entire approach to acute MI care, or heart failure care, or pneumonia care. That means everything from doing a better job of triaging who goes to the ICU versus the floor, to trying out different models for how you take care of acute MI patients. Should they always be taken care of by cardiology? Should they have more of a team-based approach? The bottom line is I don't know the answers to these questions, and neither does anyone else.

What I see is too little experimentation. Entire industries experiment all the time with how to improve their bottom line. Amazon does little experiments every day. They're moving things around on the screen. They're trying this; they're trying that. Of course their goal is to get you to buy more stuff. I don't see that kind of experimentation happening across the 5,000 hospitals that take care of acutely ill patients in the US, certainly not experimentation in terms of how to improve outcomes. I don't think there's much in the way of incentives to do those things. If we can put enough incentives on the table, we're not only going to continue doing the process stuff, which you need to do, but we're also going to get people to think differently about how we improve outcomes and maybe redesign the entire way we care for patients.

RW: When you're talking about experimentation, and in essence research, it sounds like you're really talking about the local level, and the problem being that the organization either doesn't have sufficient imagination or incentives or the right org chart. It seems less about federal research funding—maybe research with a capital R?

AJ: I'm not talking about the kind of research that you and I do. I'm interested in the millions of interactions that happen every day; the real innovation comes from people tweaking each of those, people trying out new models. If a hospital figures out how to cut its AMI mortality rate in a meaningful way, or figures out how to cut its health care–associated infection rates by 80%, 90%, or 100%, there will be other people who will be interested in that model. As those models spread and people try broader interventions, that is when you need researchers with a capital R to evaluate whether it's actually having a systemic impact. But what I'm thinking about is much more experimentation and research at the sharp end of health care delivery.

RW: It sounds like you think that will only happen if the incentives drive organizations to think harder about better ways of doing things?

AJ: I do. I think if we don't have the incentives, you are still going to get some organizations doing this anyway. When I talk about incentives, people say, "Well how do you then explain the Geisingers and the Virginia Masons of the world?" And the answer is that even in a system where incentives are pretty badly aligned, you're going to get a few outstanding organizations. You'll get leaders who are inspired by something bigger than financial incentives, who are going to do this because it's the right thing to do. And that's awesome. I love that. I think it's a great model. Unfortunately, that model doesn't spread very well. If I could replace the CEO of every hospital in America with the CEO of Virginia Mason, the country would be much better for it. But since we cannot do that, the question is: how do you get everybody else on board? The small number of leading institutions ultimately represents a small share of care, and what I want to see is that kind of creativity and experimentation happening much more broadly than among just these leading institutions.

RW: You've shown that as we switch to outcome measurement, there will be challenges in measuring the care of disadvantaged populations or very sick patients. How concerned are you about that? How good do you think the state of case-mix adjustment is? How good does it need to be in order to have outcome measurement be fair?

AJ: I start off by asking what the alternative is. If the alternative is to stick with the plan we have now, then that really lowers my bar for saying I'm willing to live with the warts that come with risk adjustment methods that aren't perfect. Empirically we've seen that some outcome measures actually do pretty well. If you look at risk-adjusted mortality rates, for instance, safety-net hospitals have marginally worse outcomes, but the difference is actually not very large. We worry a lot about risk adjustment for academic teaching hospitals, because they care for the sickest patients. Yet when we look at the data, we find that risk adjustment generally works fine for them when it comes to mortality rates. Academic hospitals tend to have lower risk-adjusted mortality rates.

The single biggest thing we need is more clinical data, because right now we're primarily using administrative data both to capture outcomes and to do the risk adjustment. Given that we're moving very quickly towards having electronic health records deployed widely, I don't think it's a stretch to say that if CMS wanted this, we could soon have every hospital claim linked with a small set of clinical data that would help us dramatically improve our risk adjustment scheme. To answer your question, I think the current risk adjustment system is not perfect, but I see very little evidence that certain types of hospitals are going to be systematically hurt by it, especially if we focus on the right outcomes, such as mortality rates. Yes, it needs to be improved and it can be improved, but while we're waiting for that improvement I'm pretty comfortable moving forward.

Some measures are more problematic, the biggest one being the readmissions measure. Here we see much clearer evidence that minority patients and poor patients are much more likely to be readmitted. Institutions located in communities with a high proportion of minority and poor patients are much more likely to be penalized. The reason is that so much of what happens with readmissions is really about what happens after the patient goes home, the kinds of social support they have, and what kinds of resources exist in their community. There are some pretty reasonable fixes there, including accounting for the proportion of poor or minority patients your hospital might have. So I think these concerns about outcome measures are real, but they're manageable if we're willing to be flexible. If the alternative is to do nothing at all or to keep focusing on the small number of processes, most of which have topped out, then I'm happy to move towards outcomes and then work towards improving how we capture and risk adjust them.

RW: One of the tensions between safety and quality is that while it's relatively easy to measure mortality and maybe case-mix adjust mortality and certain process measures, it's pretty hard to measure diagnostic errors for example, and some of the other issues that fall more squarely in safety than quality. Do you think the state of the art is going there? And do you worry that because it's easier to measure quality we'll take our eye off the safety ball?

AJ: Two things have really held us back from progress on patient safety: one is incentives, which we've talked about, and the second is a lack of good measures for patient safety. We do obviously have pretty good measures for health care–associated infections. The majority of hospitals now are reporting those to the CDC. I find it completely stunning that almost half the hospitals are not, and it seems to me that one of the easy policy fixes here would be for CMS to require that any hospital that wants to take care of Medicare patients report its infection rates to the CDC and have them be public. One of the reasons infection rates have been coming down is because we have a pretty good measure, and people pay attention to it. But once you get beyond that, good reliable measures are hard to find. The best measures we have are based on claims data, and they are just not that great. This is a place where there is a real role for research with a capital R.

If I could invest in one thing in patient safety, it would be in coming up with high-quality, reliable, and reasonably up-to-date measures of patient safety that could be operationalized. Because if you have that, you can put incentives around it and you can hold people accountable for it. I think you can make a lot of progress. One relatively easier place to start is around medication errors—this is something that David Bates obviously has done more work on than anybody else, probably in the world. Automated tools can help you identify and track medication errors. Those tools are not perfect, but they're pretty good, and it's stunning to me that they're not widely deployed across every hospital in America. When you get to diagnostic errors, it gets really tough. Is this a place where we're going to have to use natural language processing on vast electronic health record data to identify diagnostic errors? I don't know. That to me is what the research agenda for patient safety has to focus on over the next 3 to 5 years.

RW: You showed a number of years ago that the uptake of EHRs in American hospitals and doctors' offices was woefully low. We now have a federal incentive program sprinkling around about $20 billion to promote that. How do you think it's gone?

AJ: I think policymakers have gotten it pretty much right. And I laugh because there's clearly been a lot of criticism of this program. It's been called a boondoggle and a giveaway. The way I think about it is that this is not just the federal government giving out incentives, this is Medicare giving out incentives. With Medicare acting as a payer, the largest payer in the country, one problem is that the advocates of EHRs oversold them. They talked about how, if we can just get electronic health records in, we're going to improve quality, eliminate all adverse events, save hundreds of billions of dollars a year, and make this a healthier and happier population. Well, little surprise that EHRs have not gotten us to the promised land. What we have seen in the data is a pretty impressive uptick in adoption. In just the first year of the incentive program, we saw that the hospital adoption number, which was 9% in 2008, had tripled to 27% by 2011. It's still a small number, but that means in a couple of years about 20% of American hospitals adopted electronic health records. The hard part is turning that into real gains in quality, safety, and efficiency, and that's where CMS is going to have to play a bigger role with things like accountable care organizations or pay-for-performance programs. Those are going to be much easier for hospitals to execute on if they have an electronic health record in place.

RW: The other concern you hear is that the electronic health records just aren't all that good. Is that just because you think they haven't gone through all the versions they need to go through to get where they need to be?

AJ: So this is a source of some conflict for me internally as well. I recently referred to today's EHRs as cutting-edge 1995 technology. I actually believe that many of them are cutting-edge 1995 technology. They're really pretty lousy from a user experience standpoint, and now I'm speaking very much as a physician. I've gone into hospitals and I've used this stuff. I work at the VA, and the CPRS (Computerized Patient Record System) we use is actually not bad, but it's the same one I was using as a resident at the San Francisco VA more than a decade ago. So the industry is not innovating fast enough. One downside of creating an incentive program that's very tightly packed into just a few years is that the entire industry, especially the leaders, has decided to focus all its attention on selling the products and very little attention on trying to make the systems better. Again, the question is, should we have waited until these systems got better? Given what we know about the safety risk of delivering health care using paper-based records, to me the tradeoff seems right. We needed to push organizations towards adopting electronic health records. I wish these systems were better. I wish that the market worked a little bit better. In the short run, it looks like we'll have a few companies that end up dominating. But I'm optimistic that even though these systems aren't great, they're going to get better, and hopefully we'll have new competitors in the marketplace who will be able to come in and create new EHRs that are actually fun and easy and good to use.

RW: For someone who does research in this area and is a practicing physician, I wonder about the tension between scientific integrity and getting it right versus enthusiasm and boosterism for promoting the field. You've done a number of studies with great scientific integrity that have shown that things that we hoped would work didn't. That sometimes disappoints people and maybe even deflates some of the efforts. Do you worry about that?

AJ: I don't. The boosterism that I worry about is when researchers decide that one solution is right for everyone. The goal of research is to learn, not to advocate for one solution or another. For example, if I really thought electronic health records were the latest and greatest thing and that they were going to solve all of our cost and quality problems, then it becomes very hard, as a researcher, to ever publish a negative study on EHRs. One conflict in research that all of us face is not becoming too wedded to any particular solution for improving quality, reducing harm, or reducing health care costs. I've tried very hard not to get wedded to the idea that EHRs were going to be our solution, especially because so much of the data coming out suggests that the way we're using EHRs is not generating the value we need.

As a practicing internist, what's really clear to me is that there are so many times we fail to deliver good care, despite my best intentions and despite the best intentions of nurses, residents, and medical students. I'm fundamentally interested in how we create a system that lets me deliver better care. If we do a study that says that non-payment for preventable complications didn't work, that's really important, because we need to know what doesn't work—so we can improve on it. We then have to go out and look for a new solution. The enthusiasm for the work comes from clinical practice and knowing that we have to make care better. Negative studies don't deflate my enthusiasm at all, and I hope they don't deflate the enthusiasm of others. They should make us work harder to find solutions.
