
In Conversation With… Shantanu Agrawal, MD, MPhil

September 1, 2019 

Editor's note: Dr. Agrawal is president and CEO of the National Quality Forum (NQF). He is the former deputy administrator for the Centers for Medicare and Medicaid Services (CMS). We spoke with him about the National Quality Forum, including its role in quality measurement, patient safety, and improvement.

Dr. Robert Wachter: Why does the world need the National Quality Forum (NQF) or an organization like it?

Dr. Shantanu Agrawal: Throughout our history, we've been centrally involved in helping to create much of the current national approach to quality measurement. Some of the initial work we did—even helping to define the patient safety space—has been critical. Helping to invent the language around things like never events, an early accomplishment in our history, was also singularly important. Clearly, a lot more work needs to be done in quality. 

One of the fundamental wins for NQF has been showing that we can in fact take a measurement approach to quality: that quality can be measured and improved, and that you can achieve agreement across many different stakeholders in health care about what good outcomes and good quality look like. At the time of our founding, we were as a community becoming much more aware of the issues in quality, and particularly in patient safety. But there wasn't a common lexicon, and I don't think there was a sense that agreement among all of the stakeholders in health care would fundamentally be possible. If anything, we've shown that it is, and we continue to show that every day. Our work in bringing people together, the diversity of stakeholders, arriving at a common view of good, putting that view into practice as a set of measures—and more recently as a set of quality improvement guidance and tools—that has been a critical role for NQF and one that we will continue to play.

RW: Talk a little bit about where NQF lives in the ecosystem and where your funding comes from.

SA: I always take great pains to remind people we are a private nonprofit. We do have important government relationships, rooted in our history, that allow us to have influence beyond our size. A very important facet of NQF, as far as our place in the ecosystem, is that we are not aligned to any industry or part of health care. We're not a payer organization. We're not a provider organization, a purchaser, a consumer, or anything like that. We sit at the center of all of those stakeholders, and it's important in all of our work for those stakeholders to be represented. That makes us very special in the ecosystem because everybody knows that they have a place at the NQF table. The diversity of viewpoints that are represented, the inherent tensions that come with that, the wrestling and the back and forth—all of that is central to how we arrive at answers that are ultimately acceptable across the ecosystem. 

RW: Did you need something embedded in statute that said that when NQF comes up with an acceptable measure it will be the one that, for example, CMS uses in a pay-for-performance program?

SA: Very early on in our history, Congress wrote a role into statute for a "consensus-based entity." It does not refer to us specifically or directly, but since that statute we have been the consensus-based entity. That does carry certain requirements about being multistakeholder, what that means, and how we must conduct ourselves to meet the definition. Obviously, we've worked hard to make sure that we do. That statutory foundation has provided funding to the consensus-based entity through CMS. A lot of our funding continues to come from the government. That helps us focus on nationally important priorities, and keeps us on the cutting edge and involved in what matters most. But we're also a membership organization. Another large chunk of our funding comes from membership dues, and that's a great way to make sure that we are also highly tied to the private sector and aware of its needs. We have 400-plus members. They help populate our committees. They're a great source of intelligence about what's going on in quality and the greatest needs. Both the public and private sectors interplay at NQF and help guide all of our work.

RW: Other entities are measuring quality and safety to some extent, and people pay some attention to them. How does the interaction between NQF and Leapfrog, or NQF and The Joint Commission, work?

SA: Because of our relative uniqueness as a multistakeholder, non–industry-aligned organization, people view us as a trusted voice for change and improvement. They view us as a place where good science, expertise, and basic goals around safety and quality improvement are the primary drivers. Over the years, we've created programs like endorsement, like measure selection for CMS. In part, those programs are successful because of how much people trust our position. Organizations like The Joint Commission, like NCQA [National Committee for Quality Assurance], and other multistakeholder organizations will interact with us. They'll utilize our programs to make sure that they're designing the best possible measures that are the most validated, the most scientifically rigorous, as well as the most supported by a broad consensus of stakeholders. That's an important element of our work. A lot of people use our processes or they will use NQF-endorsed measures as their gold standard measures. I've talked to a number of payers as they're getting into value-based arrangements and quite a number of them say they inherently start with the NQF-endorsed measures to see if a measure is available that meets their needs. Because they know if there is, it will be scientifically rigorous and tested in front of a multistakeholder committee. And those are really important goals for them. So we have a number of users both upstream and downstream of our programs and processes. 

RW: How does a measure get generated by you as an organization?

SA: Measures come from specialty societies, payers, the government; CMS is a large producer of measures as well. If they are looking for endorsement, then they get subjected to a highly rigorous review and endorsement process. We don't develop measures on our own because of the potential conflict with endorsement. But we have a number of different programs that try to identify where there are critical measure gaps or where measurement needs to go overall. For example, there aren't enough maternal mortality or maternal health measures today. It can also be something very broad. Recently, we have started looking at social risk adjustment as an area that measurement needs to take on, and do the necessary work to figure out what the best data sources are and what the impact of that kind of risk adjustment might be. We try to take a holistic view of measurement and improvement and identify both specific areas as well as broad functional areas where some gap filling or more expertise is needed. For some of them, we can drive knowledge generation directly, like in the area of social risk. For others, we rely on other organizations to bring developed measures for that review process. 

RW: When you say drive it directly, do you ever serve as a funder, or are you driving it by creating a consensus that work needs to be done and then other people fund it?

SA: At the end of the day, we're still a small nonprofit based in Washington, DC. We don't do a lot of direct funding ourselves. We rely on other partners. For example, when it comes to the area of social risk, this is a place where CMS and our members have stepped forward to support our work. Based on the partnerships that we develop and the members that come to NQF, we are able to take on work that is multifaceted that drives a lot of different areas at once, and we help to contribute both knowledge as well as tools and resources that others can use.

RW: Let's turn to the patient safety side. The beginning of the Never Events list was AHRQ commissioning a report called Making Health Care Safer that I led. I remember those early days of trying to figure out if you were going to focus on certain safety targets, what would they be, and it was very exciting because it was really a blank slate. In the safety world, NQF's involvement probably got on the map with the Never Events list. I remember Ken Kizer announcing these eight things that should never happen in health care and they all seemed pretty clear cut—like a scalpel left behind in your belly. There are proponents of "never events" as a title, and other people that think either it's not quite accurate or it's a little hyperbolic. What do you think about the concept of never events in terms of nomenclature?

SA: The concept is important. I think that was a paradigm shift, less because of the vocabulary, more because we were talking about safety as something that the entire ecosystem had to address directly. The notion that some things should not happen, and that we should have a high degree of accuracy around them, was a paradigm shift. Again, you can put whatever terminology around it, but many other industries have their own ideas of what should absolutely occur 100% of the time or 0% of the time. Bringing that thinking to health care meant saying that wrong-site surgery, or foreign bodies being left in a patient, is simply not okay. We have to be extremely skeptical of that event occurring even once. That we have to learn from that kind of error, from that kind of safety issue, was really critical. This cannot just be about minimizing certain bad outcomes or trying to maximize certain good outcomes. We have to set the standard for what we think good looks like, and why not aim for perfection where that's appropriate. 

RW: That makes sense. As the list has grown over the years, the nomenclature becomes even a little bit more problematic because they're not as unambiguously neverish as the original list was. (Although aspirationally you can make the same argument there should never be a fall, there should never be a decubitus ulcer.) How do you see the evolution of the list over time, and do you see the list growing in both number and in its uses?

SA: The evolution has been important. We started from days when perhaps it was a bit more binary: "This should always happen"; "this should never happen." We were able to innovate this concept of never events. As quality has matured, we've gotten into true performance measurement. A lot of things matter in health care, from clinical outcomes to the experience of care that patients have. Very recently, we started talking about health equity and looking at how outcomes vary across different patient populations, different social populations. As we get into more complicated spaces, a binary language around "never" or "always" is inadequate. We have to be talking about the performance of the system overall, deciding what measures make the most sense and then trying to optimize those measures as best as possible, realizing we are not going to achieve perfection. Probably all of us have to get comfortable with the notion that perfection is usually not the explicit goal. But improvement is still possible, and we should be aiming for improvement. That's where the right measures and the right benchmarking become useful. 

RW: When NQF started, probably 5% of hospitals had electronic health records, and now 5% do not. It's close to the same percentage as in ambulatory practice. How does the digitization of medicine change the work of NQF in both creating measures and operationalizing them?

SA: The quality community is still grappling with this movement around data. The approach to measurement has remained fragmented across different data sources, whether it's claims data, electronic health records, or registries, which have come into the foreground. We're still struggling with it because it's not clear what the best data source is for the kind of measure that you're interested in. The data sources don't speak to each other very well. It's hard to track a single patient across claims, across EHRs, across registries. We are a use case for interoperability, and we have to get to a future where all these data sources either have specific, intended uses or are far more interconnected, so that we liberate the space of measure development but also make the data collection around measurement much more facile. A big driver of frustration with the whole quality paradigm today is that it's really hard to get the data out of all of these systems into a coherent picture of, as a clinician, what is going on and where your improvement potential lies. 

RW: I hear multiple sources of concern. One is the measurement burden at the organizational level. Then, there's the experience of measurement for frontline clinicians, where some of the time the measures don't feel like they're truly capturing the quality of care. What do you say to both groups and do you think the future is going to be better in both of those domains?

SA: Definitely. We take the burden issue very seriously. That is driven by a lot of different factors. You've identified a number of them. From the volume of measures, how those measures relate to actual clinical workflows and the care that's being provided, all the way to data collection and reporting issues and even the use of measures in, say, public transparency programs. We are open to reducing the volume of measures. Endorsement can be a great leverage point for that, so that we are putting into the ecosystem only the best measures that have been highly vetted and highly tested. I have some concern that there is just a proliferation of measures occurring without a lot of rigor put behind those measures. Of course, physicians and other clinicians are on the leading edge of all of that proliferation. They experience it right away. 

I think we can do a better job at helping the ecosystem identify the best measures. We can also get into a space where we really start dealing with systems, with sets and programs of measures instead of individual measures. That is conceptually a place where NQF needs to go so that we help to rationalize systems of measurement and the number of measures that are in use, and try to bring more harmony across the public and private sectors. 

RW: You were talking about the burden that sometimes individual doctors feel. Do you feel like there's movement on that?

SA: Yes. Part of the complexity of the whole burden discussion is that many factors could potentially drive burden, and it's hard to pin down where it's coming from. To some degree, the whole quality enterprise and ecosystem is a victim of issues that are endemic to health care, like the lack of interoperability. Why do we have so many nurses walking around on patient floors doing chart abstractions in order to report quality data? That was not an inherent issue created by the quality enterprise. It's really the state of affairs with health care data and the lack of flexibility and usability of that data. So I am looking for solutions in that space to mobilize health care data and use it as efficiently and as effectively as it could be used. 

Being concerned about burden on physicians is absolutely important. In clinical practice, I want to spend all of my time actually being with the patient and practicing, and not doing a bunch of administrative things. At the same time, I don't want the conversation about burden to undercut the very foundations of why we do quality work in the first place. There's a quality measurement and improvement paradigm because there continue to be fundamental issues in quality. The To Err Is Human report identified deaths occurring in the American health care system because of quality and safety issues. We have to remind ourselves why the whole quality enterprise exists. We have to remind ourselves that real improvement has been made over the last 20 years or so, that there's a continued need for improvement, and not to let the burden conversation overwhelm the centrality of quality and safety. 

RW: You mentioned earlier that the organization is moving a bit from just the measurement part of the equation to also supporting improvement. What does that look like?

SA: That has definitely been a focus area of mine. I see from my own clinical practice that there is still sometimes a major divide between what we know to be good and what we're able to achieve. There are more than 5000 hospitals and health systems around the country and many, many more physician practices. There's an unequal distribution of knowledge and expertise. An organization like NQF can help address that situation. We can take our knowledge of measurement, our expertise, and apply it to the science of improvement. We can take our core strengths—whether it's looking at the evidence, convening the experts in the room, or looking at case studies and anecdotes—and distill out best practices that are practical for health systems and clinicians, to help guide the improvement that we want to see. 

I fundamentally believe our mission is about quality improvement. It is not merely about measurement. The more that we can do to connect our measurement side to the front lines, to the pragmatism of quality improvement, the better off we are as far as our relevance. We've done some work in the last couple of years taking steps in this direction. We did work this year on opioid stewardship, and we released a playbook on how delivery systems can implement opioid stewardship approaches. We've done similar work on shared decision-making and on antibiotic stewardship. We have projects right now, two of them in social determinants of health, one in serious mental illness, another on serious complex illness generally. These are all places where there is robust measurement, there clearly needs to be improvement, and where we can provide practical guidance so delivery systems know how to improve—not just what good looks like.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.