PSNet: Patient Safety Network

Perspectives on Safety

In Conversation with... Mark Chassin, MD, MPP, MPH

Editor's note: Mark R. Chassin, MD, MPP, MPH, is president of The Joint Commission, the preeminent standard-setting and accrediting organization in health care in the United States and, increasingly, the world. Over the course of his notable career, Dr. Chassin, an emergency medicine physician, has held a variety of key positions, including New York State Health Commissioner and chair of the department of health policy at Mount Sinai. He has published several seminal papers and was a member of the team that authored the Institute of Medicine (IOM) report, "To Err Is Human." We asked him to speak with us about his role at The Joint Commission, as well as future directions for the organization.

Dr. Robert Wachter, Editor, AHRQ WebM&M: What has surprised you the most about your position and about The Joint Commission?

Dr. Mark Chassin: The most surprising and gratifying thing that I discovered is that, despite the longevity of many of the senior staff, the appetite and enthusiasm for new ideas are really extraordinary. As I thought more about that in context, it became clear that this organization has really been in continuous change for at least 10 to 15 years, at an accelerating pace. To find that fertile ground for new initiatives and new ideas was surprising and gratifying.

RW: On one hand, there is tremendous pressure to fix the problems of medical errors and poor quality; on the other hand, there are often major gaps in what we really understand about how to make them better. How do you come down on that philosophical divide?

MC: I think we have to deal with two separate but related challenges. One is adverse events. The other is routine safety processes, such as hand hygiene, communication, and medication reconciliation, which frequently break down. In the safety process domain, we do have some models that could be used much more effectively than they have been. I'm talking about the tools that have been used in business with great success, such as Lean Six Sigma, the Toyota Production System, and change management methodologies. We are in the planning stages of an initiative that would bring these tools to bear much more effectively in health care organizations to solve the safety and quality problems we are all struggling with. With respect to adverse events, I think the story is different.

RW: In which way?

MC: The problem is that we need to generate new knowledge. Adverse events are different from routine process breakdowns because they represent unique sequences of errors that will never happen again the same way. We have imperfect tools to assess exactly what happened in the course of an adverse event. If you consider Reason's Swiss cheese model and think about all the defenses that have to be breached before harm can be done, we have imperfect ways to figure out exactly which defenses failed in which way. Then, once we understand where the weak defenses are, we have imperfect analytic tools to tell us which one to fix first. Which one is going to be lethal tomorrow? Which one can we safely put on the back burner while the other high-priority project is underway? We lack a systematic way to learn across adverse events, to build knowledge about institutional vulnerability that benefits from seeing how the defenses fail in more than one adverse event. Our current tools do not encourage us to do that. They encourage us to do what we call root cause analysis, which I think is a misnomer since there is never a single cause. There aren't usually even two or three. Five, eight, or ten defenses fail in many of these adverse events, especially the more complicated ones. But knowing what happened in one is not enough information to understand what the vulnerabilities are in an institution. The analytic methods we have are from a different generation. We need to develop the next generation of those tools to achieve the goal of high reliability.

RW: As a philosophy for you and for The Joint Commission, one way of proceeding would be to focus mostly on measuring and reporting the outcomes and then assuming the organizations will figure out how to improve. Another way, which has been more traditional for The Joint Commission, is to focus on the structures and the processes. What is your philosophy on that?

MC: It will inevitably be a blend. Either process or outcome measures can be valid measures of quality—valid in the sense that they really reflect the nature of quality. I go back to the 1990 Institute of Medicine definition (which is also The Joint Commission definition): Quality is the extent to which health services increase the likelihood of desired health outcomes. So if you're going to use an outcome as a valid measure of quality, you must have a proven relationship to processes that you can change to affect the outcome. It might be useful to know about that outcome for other reasons, but if you don't know how to improve it, then it is not really a valid measure of quality.

The reverse is true about process. You must have a proven relationship of a process to an outcome in order for that process to be a valid measure of quality. I don't think we should be taking up the scarce resources of health care organizations by focusing them on anything other than valid measures of quality. The Joint Commission has been one of the most powerful forces that has focused health care organizations—particularly hospitals—on solving critical safety and quality problems through National Patient Safety Goals, core measures, and accreditation standards. These programs spotlight for organizations exactly where they should deploy their very scarce quality improvement resources to make themselves better. We, and others with the same focusing power, have an obligation to make sure that we have the highest confidence that health outcomes will improve directly as a result of that spotlighted activity. If we do not have that high confidence, then we have to seriously question why we are asking organizations to expend effort on activities that may be off the mark.

RW: Do you have any sympathy for the hospital that says, "We just can't afford to continue to do this," especially as the bar gets raised?

MC: I have tremendous sympathy for all organizations with regard to the resources available to do quality improvement. Whether you are small, medium, or large, you have scarce resources to devote to quality improvement. I am making the argument to health care organizations that they need to find more ways to devote more resources to improvement. Ultimately, that is what it is going to take if we are going to meet the expectations of the public, and our own expectations to transform health care into a high-reliability industry with rates of adverse events and rates of breakdowns in safety processes that are comparable to other industries that have achieved high reliability, such as commercial air travel and nuclear power.

The other point I made is even more important to hammer home. To make the argument credibly that these resources are producing better quality and safer care, we must spotlight programs that really produce results that are measurable, documentable, and sustainable. That is another component to the current state of the environment that I think is problematic. There is no question that the industry has put enormous effort into improvement, but when public stakeholders say that adverse events keep happening that should not happen and safety processes keep breaking down, what do you have to show for all that effort? We do have some things to show for it. If you look at The Joint Commission's publicly reported core measures over the past 5 to 8 years, there has been steady and substantial improvement. But that is a small slice of the action for what our public stakeholders rightly expect us to be able to document. We need more resources for improvement. We need more proven interventions. We need to be able to document with good measures that improvement is producing real results in terms of health outcomes.

RW: One of the things The Joint Commission has not done, at least to my knowledge, is accredit around information technology (IT). Do you see a future in which The Joint Commission might mandate technology such as CPOE [computerized provider order entry], barcoding, and smart pumps?

MC: There are standards around information management in the manual. In general, the approach on the accreditation standard side is to say, these are the processes you need to get working right. How you choose to do that—you have a lot of leeway. As you know, we accredit lots of different places with lots of different access to capital and investment for bells and whistles. I do not see that fundamental approach changing much.

I will make two other points about IT. One is that The Joint Commission has taken a strong interest in pushing the IT agenda. We have been pushing outside the routine accreditation and certification processes for the more rapid development of IT and its application in health care. On the other hand, I do not view IT in health care as a panacea. You have to be very careful about how you employ it, especially if you have not really worked out the defects and pitfalls in the process you are trying to automate. We have seen this happen with CPOE implementation and other kinds of IT implementation. If you do not get the process working right before you automate it, bad things can happen to patients very quickly. CPOE is a perfect example of that. The aphorism that I find continually very scary about IT is that computers do not make us less stupid; they make us stupid faster.

RW: One of the things I hear from hospitals is that they understand they are going to be measured, accredited, and regulated. But they wonder whether these organizations can clean it up so that it is coming in a single voice with a single set of measures, rather than something coming from The Joint Commission, something else from CMS [Centers for Medicare & Medicaid Services], or something else from NQF [National Quality Forum]. How do you respond to that?

MC: It is a terrible problem, and The Joint Commission has been working with other national organizations and alliances—the Hospital Quality Alliance, the Ambulatory Quality Alliance, the Quality Alliance Steering Committee—and lots of other organizations, including NQF, to try to wrestle this problem to the ground. The term of art in this arena is harmonization. The measures should be identical when they are measuring the same thing. One of the problems is that all of the organizations have to agree on exactly the same way to manage whatever the problem is. For the first 10 years or so, there was really good agreement between the two organizations that matter the most to hospitals—Medicare and The Joint Commission. So far that has worked really well. But if last year's proposals from CMS, particularly outpatient and new inpatient measures, are harbingers of the future, the threat is that the agreement will be disrupted. Medicare is considering adding a large number of measures that are derived from their claims databases—measures that are not, in my view, as highly valid from a quality standpoint as most of the clinically based core measures. But those require a lot of effort on the part of hospitals to collect from medical record data. Medicare is under pressure from other stakeholders to get more measures out there and to more quickly cover more territory. It is getting more and more difficult to achieve a productive consensus.

RW: I want to ask you about some of the more controversial new directions, which are medical staff standards and the assessment of individual physician competence for credentialing and re-credentialing. Do you feel like the science and the data sources are up to the task?

MC: This is also an inevitable direction for quality measurement. The Physician Quality Reporting Initiative is out there. They have developed hundreds of measures. There is a movement now, largely in the private sector, to develop physician practice and individual physician measures on a community-wide basis. Whether it is good, bad, or indifferent in terms of measurement is in the eye of the beholder. I have already told you my bias that claims data do not produce highly valid quality measures. But they are already out there on individual physicians as well as on organizations. The challenge for hospitals is to develop ways to credential physicians that meet not just The Joint Commission standards but the common sense standards of physicians. Hospitals must have confidence that physicians are qualified to do the procedures and offer the services they have privileges to do. We certainly see examples—fortunately they are uncommon—of physicians undertaking procedures that they are not qualified to do. This is an ongoing challenge. When I was Commissioner of Health in New York, I created the first program publishing data on risk-adjusted mortality for hospitals and surgeons following coronary bypass surgery. That program has grown and flourished, and it was a long battle to get the right mix of clinically valid risk adjustment data with the right time series; physician data are published on a rolling 3-year period to make sure that adequate volumes are represented. This is tricky business, but it is clearly a direction that cannot be reversed.

RW: One thing that strikes me about the position of accreditor or regulator is that they may not get a lot of feedback from those they work with because people are scared to give them feedback. Do you think that's real and, if so, do you have any strategies for mitigating that?

MC: The Joint Commission is not a regulator in the governmental sense of having the authority to either say you can or cannot do business here. When I was Commissioner of Health in New York, I did have that authority. This is very different. This is a private organization that relies on organizations voluntarily subscribing to our service; they pay us to assess and educate to help them improve. That is a very different posture than a pure governmental regulator. That said, there is no shortage of feedback. The Joint Commission accredits more than 4,500 hospitals, but the total number of organizations and programs that we accredit or certify topped out last year at about 16,000, including long-term care, behavioral health, laboratory, home health, ambulatory care, and disease-specific programs. We are primarily known for the hospital program because it covers so much of the marketplace. We continuously reach out to all of our customers and other stakeholders to understand how we can do our job better. I see that process only intensifying as we create a more effective internal improvement process.

RW: My last question is: are you having fun?

MC: I am having a ball. This is a great organization. The people are some of the best I have ever worked with in terms of how enthusiastic and mission driven they are. If you think back just 10 years ago, when hospitals were not measuring anything on any sort of national or consistent basis, the revolution that has taken place is extraordinary. What we hear in the field is no longer, "There are problems, but they're not in my place." Organizations know they have quality problems. What they are asking us, with increasing decibel levels, is: "Don't keep telling us what to fix—tell us how to fix it." So it is a great time to be here.