In Conversation with…Janet Corrigan, PhD, MBA

April 1, 2010 

Editor's note: Janet M. Corrigan, PhD, MBA, is president and CEO of the National Quality Forum (NQF), a private, not-for-profit organization established in 1999 to develop and implement a national strategy for health care quality measurement and reporting. In 11 years, through its endorsement of measures that have fueled public reporting and pay-for-performance initiatives, and its compilation of NQF-endorsed "safe practices" and "serious reportable events," the NQF has become a key player in the safety and quality landscape. Prior to joining NQF, Dr. Corrigan was Senior Board Director at the Institute of Medicine, where she had a major hand in crafting the IOM's reports on quality and safety, including the seminal reports To Err Is Human and Crossing the Quality Chasm.

Dr. Robert Wachter, Editor, AHRQ WebM&M: Tell us what the National Quality Forum does.

Dr. Janet Corrigan: The National Quality Forum [NQF] was established in 1999 pursuant to the recommendations of the President's Advisory Commission on Consumer Protection and Quality. Essentially, under the National Technology Transfer and Advancement Act, we are recognized as a private sector standard-setting organization. We review performance measures, best practices, and serious reportable events, and we endorse those as national standards.

RW: Has the fact that you're developing standards fueled public reporting of those standards, or did we need NQF because those reporting initiatives were already out there?

JC: Well, it's probably more the former. Standardized measures, practices, and serious reportable events are tools; they enable reporting efforts. Certainly, there were pioneering public reporting initiatives long before the National Quality Forum came into existence, and there was widespread recognition back in the 1990s that we wanted to promote greater transparency. Fortunately, there was the foresight to establish the National Quality Forum, because reporting is far easier when everyone agrees on the standardized measures to be used in that process.

RW: How well has that agreement gone? You still hear calls for harmonization and complaints that different organizations have their own ways of measuring the same thing.

JC: We've come a long way, actually. The current NQF portfolio includes more than 600 measures, and there will be more coming forward. But volume isn't the issue here. We're trying to get measures that assess important aspects of performance, and measures that are meaningful to consumers. Consumers and purchasers are a very important audience, but measures should also be useful for quality improvement. While we've come a long way in terms of moving toward commonly accepted standardized national measures, we still have challenges when it comes to harmonization. NQF expert panels are asked to identify opportunities to harmonize measures—for example, to make sure that all of the measures in the area of immunizations follow common conventions when it comes to specifying the numerators, the denominators, and the exclusions. Harmonizing measure sets will make it much easier for clinicians to understand and use the information to improve patient care.

RW: And what are the carrots and sticks to get everybody to play together?

JC: Well, probably the biggest carrot is when the federal government uses standardized performance measures in its public reporting and payment programs. NQF-endorsed measures are the measures of first choice by the federal government and private purchasers. Medicare, of course, is a very large purchaser, a very large regulator. So they, in many ways, set the stage for standardization and public reporting.

RW: Is the organization agnostic about what happens to the measures? Obviously, you're very well aware of the context and whether measures are being used in no-pay-for-errors or pay-for-performance initiatives. Once a measure is developed and endorsed by the organization, is the organization's role in that measure essentially done?

JC: Well, I wouldn't say it's entirely done. Our primary purpose is to endorse measures that are useful for public reporting and quality improvement purposes. While we don't attempt to pass judgment on specific applications, we encourage everyone to use those tools wisely, given that no measures are perfect. They all have strengths and weaknesses. Our role does continue, though, once a measure goes out the door. We are now working on a more formal feedback loop from the front lines, because we want to know whether measures are performing as expected. As a part of being endorsed, we do require that the measures be field tested. But we also realize that the whole area of measurement and public reporting is moving at a very rapid pace, and it is important to have mechanisms for ongoing monitoring. We do review all measures at least every 3 years, as a part of our maintenance process.

RW: I imagine that the never-ending process of coming up with new measures, while dealing with an ever-increasing set of existing measures that you need to re-review for unintended consequences and new science, must be daunting.

JC: Well, it is, and it's not only daunting; it has also made us realize that the whole area of measurement and public reporting needs more focus. That brings me to another major area of responsibility for the National Quality Forum. About 3 years ago, our Board of Directors expanded the mission of NQF. As I indicated, we were initially established to serve as a private sector standard-setting organization. The board looked at the NQF portfolio (which at that time numbered about 150 measures; we now have more than four times that number) and realized that the number of measures was increasing rapidly. At the same time, there are gaps in the portfolio. For example, a lot of our measures relate to the medical care process, but very few address care coordination or handoffs. We don't have as many outcome measures as we would like to assess the impact of health care on patient functioning, and we have very few measures of patient engagement in decision making. So we have a wealth of measures, but at the same time we also have many gaps in the portfolio. The board expanded our mission to include working in partnership with other groups to set national priorities and goals for performance improvement. That effort is now very much underway.

What we have essentially done with those priorities and goals is to identify "high-leverage" areas—by which we mean that if we focus our improvement activities on those areas, we will achieve very sizeable gains in terms of improved health and health care. The To Err Is Human and Crossing the Quality Chasm reports, I think, set out a direction, a mandate, and a call for very real change. But in all honesty, we haven't seen large improvements in quality or safety, and we haven't really achieved fundamental reform in the delivery system. The delivery system is still fragmented and decentralized. We lack critical supports like electronic health records and personal health records. By setting national priorities and goals, we are also trying to set performance expectations at such a level that meeting these expectations will require more fundamental reform of the delivery system.

RW: You've used the term "shared accountability." What does that mean to you?

JC: We have numerous areas of performance that no individual clinician or even an individual hospital can control. Care coordination is certainly the best example. Medication reconciliation is another good example, as is palliative care. These are all areas where multiple providers—physicians, nurses, pharmacists, health educators, others—contribute to patient outcomes, and the family caregivers are members of the care team as well. In the community, the patient may receive rehabilitative care or home health care. All of those providers and health care professionals contribute to the patient outcomes. That's an example where we need shared accountability.

Shared accountability is a term that came from some of the work of the Institute of Medicine, where I spent many years. We tried to think through how to encourage all the providers—those who touch the patient and influence the outcomes—to come together, to work toward smooth handoffs and good communication, and to work within a shared treatment plan to achieve the best outcomes. That's where shared accountability comes in. It's often used in the context of pay-for-performance programs. Right now, we primarily have pay-for-performance programs that reward individual physicians, small practices, or perhaps a hospital or health care institution. In the future, we need to move toward shared rewards that reflect that multiple participants need to be held accountable and rewarded for contributing to that patient's outcome.

RW: You mentioned that one of your agendas is to create such a robust and diverse set of measures that systems have to fundamentally reform themselves to meet the mandate. It strikes me that there must be some balance there of just the right number of measures to catalyze change without overwhelming organizations.

JC: Yes. There's a critical balance. And that's the other reason that the National Priorities Partnership effort is underway: we realize that right now we are probably overwhelming the frontline delivery system and not focusing its efforts on some of the most critical areas. The National Priorities Partnership has identified priorities and goals in six areas—population health, safety, care coordination, palliative and end-of-life care, patient and family engagement, and overuse. Eliminating health disparities is a cross-cutting goal that should be addressed in each of the six priority areas.

The goals in most of these areas are stretch goals—ones that are challenging to achieve but have great potential to save lives, improve patient outcomes, and remove waste from the health care system. We think it's critical to limit that number so that adequate attention can be focused not only on measurement and reporting, but also on the most important thing, which is improvement. Purchasers, regulators, board certification and recertification groups, and accreditors all play critical roles. If they can align their activities around these six areas, then we think we have a much better chance of achieving these very significant and difficult goals.

RW: You have a unique perspective on all of this, having been at the IOM and having helped lead its reports on safety and quality. Now we're coming up on the 10th anniversary of both reports. What were you expecting to accomplish, and what's worked the way you thought it would? What's been different?

JC: Well, I'm really pleased about the impact of those two reports. With To Err Is Human, in many ways the greatest contribution was that it put safety and quality on the national agenda. When the report came out, it had 3 days of saturation-level coverage in print and broadcast media. So for the first time, I think, safety became an issue that the American public was aware of. Not the entire American public, obviously, but a sizeable portion of it, along with the media. And that is very, very important. I believe that if you're going to achieve major change in a sector as big and complicated as health care, there has to be awareness on the part of the American public and those who represent it, i.e., the elected officials, that everything is not okay. So I'm pleased that To Err Is Human raised the red flag and said we have to do something here.

RW: Did that response surprise you?

JC: When it was released, we expected quite a bit of media coverage, but nothing approaching the degree it received. Yes, it was a surprise. Although To Err Is Human certainly made a big "splash," Crossing the Quality Chasm made important contributions as well. It was one of the very first reports that tied the issue of payment to quality. For the first time, there was awareness that our payment policies have to be modified, even more than they have been already, to be much better aligned with achieving our quality and safety objectives. It's taken quite a few years and, frankly, the progress there has been too slow. I hope that we'll engage over the next couple of years in very serious discussions around more fundamental payment reform. What we're doing now with pay-for-performance does begin to tweak the payment mechanisms, but it doesn't go far enough. We're always hearing from the front lines that there are disincentives in our current payment programs that make it difficult to coordinate care for chronically ill patients and to provide the best care, achieve the best outcomes, do it safely, and do it affordably. I hope that we will see more action going forward around payment reform.

Another contribution of Crossing the Quality Chasm was a very important chapter about the need for stronger organizational supports to achieve higher levels of quality. It called for fundamental reform of the delivery system. That chapter probably received the least attention and I think it's an important one. It's not all about health information technology. That would be a big step forward, and it's critically important that we get electronic health records and personal health records in place and that there be connectivity. But it's also important to recognize that health care is so complicated that it's difficult to do it without good knowledge management, without the ability to assemble multidisciplinary teams, without specialized expertise in quality measurement, engineering, and other areas to carefully redesign care processes. All of that requires more sophisticated organizational supports. That's where we've made the least progress in the last 10 years.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.