In Conversation With… Andrew Gettinger, MD
Editor's note: Dr. Gettinger is the Chief Medical Information Officer and the Executive Director of the Office of Clinical Quality and Safety at the Office of the National Coordinator for Health Information Technology (ONC). He led the development of an electronic health record (EHR) system at Dartmouth and subsequently was the senior physician leader during its transition to a vendor-based EHR. We spoke with him about safety and health information technology.
Dr. Robert M. Wachter: When you started using health information technology (IT) as a clinically practicing intensivist and an anesthesiologist, what did you see as the safety benefits and what did you see as the hazards?
Dr. Andrew Gettinger: As a benefit, I saw the accessibility of information about our patients. I started practice when we had paper lab slips. We would go down to radiology to put up the films with the radiologist; we couldn't do anything with the patient's record unless we were physically present with the paper chart; we had delays in dictated, transcribed notes; and we had challenges deciphering individual consultants' handwriting to know what they were thinking and recommending. So the transition to typed notes, lab results, radiology reports, autopsy reports, and gross pathology reports was a substantial improvement.
But each of them also had some cons. The first was that we didn't have a centrally organized way of interacting with all the information around a patient in the early [electronic] systems, which lacked graphical user interfaces. They were more terminal based. You'd basically go in and say what you wanted to do, such as look at radiology reports, and you'd have to identify which patient. Then if you wanted to look at that same patient's labs, you'd have to back out with a bunch of keystrokes and go into the lab part of the system to look up results. No graphing, no trending, no alerting. All of those were very rudimentary. It represented substantial progress when electronic health records began to be organized around the patient first, with demographics, followed by an organized way to move among different content areas in the record.
RW: What's the business failure that prevents digital health care from being as high quality or as safe as we've seen with digitization in other industries?
AG: Medical equipment manufacturers are worried about product liability, so they would rather err on the side of alerting. The EHR has a similar issue. We don't have a methodology to avoid alerting clinicians with the kind of list of possible side effects that appears in the PDR [Physicians' Desk Reference]. Basically, it's an encyclopedia of side effects, some of which may have been reported in only one or two patients, but all of which get loaded into the medication side effect dictionaries because the threshold is "this could happen." Most clinicians really want to be focused on severe reactions and "never" combinations, such as "never give meperidine [a powerful pain medicine also known as Demerol] to a patient who's on an MAO inhibitor." When you ask people why they don't just turn off those "this could happen" alerts, there's a concern about liability.
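Editor's note: one way to picture the filtering Dr. Gettinger describes is an alert rule keyed to severity. The sketch below is purely illustrative; the drug pairs, severity tiers, and rule dictionary are hypothetical stand-ins, not any vendor's actual interaction database. It shows how a threshold could suppress the "this could happen" noise while always surfacing the "never" combinations.

```python
# Hypothetical sketch of severity-tiered interaction alerting.
# All drug pairs, tiers, and messages here are illustrative only.
from enum import IntEnum

class Severity(IntEnum):
    COULD_HAPPEN = 1      # reported in a handful of patients
    SIGNIFICANT = 2
    CONTRAINDICATED = 3   # the "never" combinations

# Toy interaction dictionary keyed by unordered drug pairs.
INTERACTIONS = {
    frozenset({"meperidine", "phenelzine"}): (
        Severity.CONTRAINDICATED,
        "Never give meperidine to a patient on an MAO inhibitor.",
    ),
    frozenset({"ibuprofen", "lisinopril"}): (
        Severity.COULD_HAPPEN,
        "Possible reduced antihypertensive effect.",
    ),
}

def interaction_alerts(new_drug, active_meds, threshold=Severity.CONTRAINDICATED):
    """Return only the alerts at or above the severity threshold."""
    alerts = []
    for med in active_meds:
        hit = INTERACTIONS.get(frozenset({new_drug, med}))
        if hit and hit[0] >= threshold:
            alerts.append((med, *hit))
    return alerts

# With the threshold at CONTRAINDICATED, the meperidine/MAO-inhibitor rule
# fires while the low-grade "could happen" pairing stays silent.
print(interaction_alerts("meperidine", ["phenelzine", "ibuprofen"]))
```

The liability concern he raises is precisely about who gets to set that threshold, and who answers for the alerts it suppresses.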
RW: As you think about the safety challenges, what are the other main categories that worry you the most?
AG: I worry about three areas in health IT and safety. First, I worry that the software has lots of room to improve. The key is user-centered design: making sure that the coding of the software makes sense, that the screens are not overly busy, that they don't tax the user, that colors are used in a way that doesn't preclude colorblind people from seeing the appropriate cues, and that the navigation is designed in a way to decrease the likelihood or frequency of doing something on the wrong patient. All of that happens in the design process.
The second big bucket is in the implementation. We've learned that even really well-designed software can be made very challenging for clinicians when the implementation isn't done well. As an institution begins the process of implementing an EHR, the challenge is that any one institution or office doesn't do this every day; therefore, they don't have a lot of knowledge regarding what the important issues are. We wish everybody would look at our SAFER guides, which we commissioned and developed with nationally known safety experts, but not a lot of folks actually do that. Then the people doing the implementation are frequently not clinicians, because it's so costly to pull people from practice or because there aren't enough clinicians to meet the clinical demands. So the decisions made by the implementation team may not be the best decisions.
My final concern is that clinicians have to learn how to use the EHR. It is not the same as the paper record. You actually have to configure it to make it efficient, make it save time, and understand how it works. The analogy I use for this is getting on a plane where the pilot hasn't been through the simulator. If I were to go back to our large-scale deployment, I would have invested far more time in developing EHR simulations, getting our software configured, and then making sure our clinicians played with the systems more before they started using them to care for patients. Despite the recent publication coming out of the Harvard hospitals that suggests there is not increased patient harm around the deployment of EHRs, most of us who have been at that ground level think that there probably is underreporting, and that those transitions actually do cause substantial negative outcomes. A year ago, we commissioned a study with The Joint Commission to look at their sentinel events to see whether or not health IT contributed. They identified some areas where that was the case, but they particularly suggested that it's risky during these transitions. It's a reminder to clinicians that you have to be very focused and thoughtful when you're in the midst of implementation. I'm now driving a new car, and I'm not used to where all the controls are; it's the same with a new EHR. If something comes up and it doesn't make sense, just because the computer presented it to you doesn't necessarily mean it's correct.
RW: Let's turn to your role in the ONC and maybe go back to the beginning of the ONC, which started in 2004. I'll lay out three critiques for you to address; they partly have to do with safety. One is that this big bolus of money got us to buy and install software that wasn't very good and is not very safe. The second is that interoperability should have been baked in from the start. The third is that meaningful use got the government too deeply into the weeds of regulating IT and stood in the way of innovation. How do you react to those three critiques?
AG: If you look at the curve of EHR adoption, it's very clear that the bolus of federal funding really changed the inflection point and pushed adoption faster than it would have happened otherwise. This was part of the American Recovery and Reinvestment Act, the stimulus program. It was an opportunity to take advantage of what was, and is, unlikely to occur again: the federal government putting resources toward enabling people to make this transition. It didn't defray 100% of the costs, and it certainly doesn't cover the long-term ongoing costs. But had the former leadership of ONC not stepped in and taken the opportunity, we would have been approaching this much more slowly. That may have been good. It's easy in retrospect to be critical: it was too much money, it was too fast, it was too x, it was too y.
We currently have between 500 and 600 ambulatory software vendors. Many are not producing the necessary quality of software. The certification program is almost like board certification: it's a minimum standard, a low bar. In retrospect, it's easy to say we would have liked a bit more. Interoperability is clearly one of those things. But even 12 years later, we still have some fundamental problems with interoperability. We don't have a national patient identifier, which was part of the original HIPAA legislation.
Google, Amazon, credit bureaus: they all know more about us at times than our health care system does, because we are not able to share information back and forth reliably about patients. If we had mandated interoperability without better identification, we wouldn't have succeeded, and we still won't succeed until we do a better job. That area is a big safety issue. I don't call it a national patient identifier; I call it a patient safety identifier. I think that's the right way to frame this. It's essential if we're going to be aggregating data.
RW: I'm advising the UK on their digital strategy and they have an NHS number. Everybody has one, and it's an amazingly important asset.
AG: Many countries have done that. What the folks who have concerns about privacy should weigh is that, absent that identifier, we share far more information to get a match that is not as good as the one we could have with the identifier. At least that's the hypothesis; the first step is to test it and explore whether that's the case. A small minority doesn't think matching will get better with an identifier. But the private sector is looking at this pretty carefully now. CHIME has a very exciting HeroX million-dollar competition that hopefully will produce some best practice suggestions that can be adopted without government mandates.
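Editor's note: the tradeoff Dr. Gettinger describes can be sketched in a few lines of code. The example below is hypothetical; the field names, weights, and identifier format are invented for illustration and do not reflect any real matching algorithm. With a dedicated identifier, matching is one exact comparison; without it, systems must disclose several demographic fields and still settle for a probabilistic score.

```python
# Hypothetical sketch: deterministic matching on a dedicated identifier
# versus probabilistic matching on demographics. All field names, weights,
# and formats are invented for illustration.

def match_with_identifier(rec_a, rec_b):
    """Deterministic match: one shared field, an unambiguous answer."""
    return rec_a["patient_safety_id"] == rec_b["patient_safety_id"]

def demographic_match_score(rec_a, rec_b):
    """Probabilistic match: many shared fields, a score rather than a fact.

    The caller still has to pick a linkage threshold, e.g. >= 0.8.
    """
    weights = {"last_name": 0.30, "first_name": 0.20, "dob": 0.35, "zip": 0.15}
    return round(sum(w for field, w in weights.items()
                     if rec_a.get(field) == rec_b.get(field)), 2)

a = {"patient_safety_id": "PSI-0042", "last_name": "Smith",
     "first_name": "Jon", "dob": "1960-01-02", "zip": "03755"}
b = {"patient_safety_id": "PSI-0042", "last_name": "Smith",
     "first_name": "John", "dob": "1960-01-02", "zip": "03755"}

print(match_with_identifier(a, b))    # True: exact match, minimal disclosure
print(demographic_match_score(a, b))  # 0.8: four fields disclosed, three match
```

Note the asymmetry: the identifier path discloses a single opaque field, while the demographic path exposes name, birth date, and location, and still returns only a probability.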
RW: And critiques about meaningful use?
AG: Historically, the prior ONC leadership asked folks who had EHRs to start using them. The measures were pretty simple and straightforward. Under the guidance of our two FACAs [federal advisory committees chartered under the Federal Advisory Committee Act], over time they got more complicated. Then what we've missed is the opportunity for specialized areas of clinical practice, led by specialty societies, to define what's important for the practice of whatever their specialty or subspecialty is. I'm hoping that over time we'll go in that direction more. The meaningful use program has a lot of administrative overhead. These were very well-meaning efforts, but in practice they've been too burdensome for practicing clinicians and institutions.
My vision is that, whether the third-party payer is the federal government or someone else, our patients will be able to evaluate quality of care by methods that don't require doctors and other clinicians to document explicitly to demonstrate that quality. I believe that routine work should be articulated by evidence-based measures owned by the specialty societies or other disciplines as a first step. Nurses will be of increasing importance; they are the largest health care workforce we have, and an aging society will require more coordination, more home care, and more help as we continue to age.
RW: You mentioned government regulations and meaningful use were well meaning. That was a theme of my research in thinking about this for a year: the unanticipated consequences of various parties doing what they thought was right. Everyone from the ONC to a software developer to a hospital is doing what they think is the right thing to do, but until you step into the shoes of the party or the organization dealing with what you produced, it's very hard to get something this complicated right. One of the key lessons is to be more of a learning system all the way through.
AG: Absolutely. I routinely interact with not just the leadership of ONC but all the various staff members. It's a privilege to work with an incredible group of dedicated government servants. Everybody is trying to do what's best for patients. We all have different backgrounds. It's a very interesting conversation when we're sitting in a room where there are clinicians, public health people, and lawyers. And we have to translate the clinical perspective into the legal perspective to make it a regulation.
RW: Let's turn to this safety enterprise. How much of that was informed by the prior experience with other initiatives in the ONC, whether it's meaningful use or others? Was there learning that if we centralize this too much, unanticipated consequences are going to impede our goals?
AG: Not at all. I'm very proud of this. We created a group of invested stakeholders in health IT safety. And it ranged from the American Medical Association (AMA) and the American Hospital Association (AHA) to the Electronic Health Record Association (EHRA) to individual researchers, to insurers, professional liability insurers, and developers. Of course there was a large smattering of federal folks: FDA [Food and Drug Administration], AHRQ [Agency for Healthcare Research and Quality], ONC, CMS [Centers for Medicare and Medicaid Services], and FCC [Federal Communications Commission] were there. At the outset, I wondered if this group would ever agree on the best way to handle this. Over a series of conference calls with software that facilitated consensus and sharing, we actually reached a good place. The notion of the health IT safety collaborative really came from the broad community with some guidance from the federal government and some guidance from the Institute of Medicine (now the National Academy of Medicine).
You may remember the old structure for morbidity and mortality conferences, where it was not uncommon to castigate, accuse, blame, and criticize. That has evolved to looking at safety not as an individual issue but as a systems issue, whether through root cause analysis after something hasn't turned out well or through efforts to be more transparent in the hospital. We have to recognize that lots of things go wrong in hospitals and patient care; it's a very dangerous, very complex environment. We need to figure out how to make sure that James Reason's Swiss cheese holes don't all align and allow errors and patient harms to occur. I'm very enthusiastic about the outcome.
It's a voluntary, public–private partnership. It is not regulatory, and we hope to have sufficient privacy protection that individual institutions, including developers and implementers, will feel comfortable sharing their problems. The model for this is ASIAS [Aviation Safety Information Analysis and Sharing] in the airline industry; the FAA interacts with it. It's about 15–20 years old and has an advantage in that it gets a lot of automated signals from airlines as jets take off and land. They also get narrative reports about near misses, such as "This airport has this runway with these lights, and every time I come down I'm confused, and sometimes I end up on the wrong runway." If you look at the results of the FAA's work, the work of their contractor in this space, and the work of the aviation industry (the commercial airlines fund this), airline safety has improved phenomenally over time. I saw some of the data. I was amazed. That's what we want to replicate.
RW: Let me play devil's advocate. I believe everything you just said, but I'm sure some folks out there might say there's an EHR system with an unusable and confusing screen, and a bunch of errors are happening right now in 10 different places around the country. None of them know it's happening. They're not required to report the errors. The vendor may even have something in their contract with the health system making it difficult, if not impossible, to disseminate that information. So the argument is that some central regulatory organization should mandate the collection of that information and have the tools necessary to force that vendor to make their system safer. What's the counterargument to that?
AG: It is the argument against regulation in general: a lot of times, regulation that starts out in a very well-meaning way ends up not really achieving its goal. One aspect of what you talked about is what I'm going to call gag clauses. I think almost everybody who looks at this would like there to be a way that screenshots can be shared so things can be improved, and a lot of that could occur within the collaborative. Most EHR developers do not want to have marginal software; they want things to work. So clearly, if given the opportunity and shown content that is problematic, we believe that most of them will want to fix it. That leaves out what I'm going to call a bad actor who doesn't want to fix it and doesn't care, but I think those are few and far between; I don't think that's the majority of what's happening in the industry today. Also, so much comes down to the implementation. We have not studied that enough, and we haven't been rigorous enough about standardization.
The federal government, if Congress approves, has the ability to regulate. But the federal government also has the ability to incent and therefore move the market and change the dynamic. Perhaps the meaningful use program did some of that, and we can quibble about whether or not there were opportunities to have done it better.
But I'm very excited about what we don't know yet. We don't know yet how this pivot away from transactional fee-for-service care [to value-based care] is going to change health IT, because the health IT that was designed for a fee-for-service system is not the kind of health IT that we're going to need for a quality- and outcomes-based system. Patients get their care from multiple places. How are we going to assess what element of that care is quality care? A lot of the work published so far is from systems where most of the care is internal to the system, whether it's Kaiser, the VA, the Department of Defense, or entities like Cleveland Clinic, Geisinger, Dartmouth–Hitchcock, or Intermountain. When you start to look at how care is actually received, different hospitals today don't necessarily share information very effectively or very easily. How is the new wave of payment reform going to affect that? I think the effect is going to be substantial.
RW: Where do things stand with the ONC's safety enterprise? You talked about its construction and conception. Is it up and running? Is it a thing today?
AG: No, and I've been very public about this. We are doing small tests of change. We are just finishing up a mini-collaborative on pick list errors, and we're going to be coming forward with some recommendations. We're also watching very carefully a parallel collaborative that ECRI is leading, completely independent of the government collaborative, which I perceive as very successful. Their first topic was the problem of copy and paste; now they're looking at patient identity and patient matching.
In order for the safety collaborative to get off the ground, we need two things. We need Congress to fund it, and we need additional authorization to give protection to developers and implementers beyond what we have today. There are members of Congress, people in the community, and health IT stakeholders who are very much aligned with this. Sometimes good ideas take a while to germinate, especially when there is a substantial amount of change and there are other priorities going on. But from my time on the Hill, I know that the staff there and the principals are really interested in doing the right thing as well. I'm hopeful that it will be a reality in the not too distant future.