
In Conversation with…Thomas J. Nasca, MD

February 1, 2010 

Editor's note: Thomas J. Nasca, MD, is the executive director and chief executive officer of the Accreditation Council for Graduate Medical Education (ACGME). Prior to joining the ACGME in 2007, Dr. Nasca, a nephrologist, was dean of Jefferson Medical College and Senior Vice President for Academic Affairs of Thomas Jefferson University. We asked him to speak with us about the role of the ACGME in patient safety.

Dr. Robert Wachter, Editor, AHRQ WebM&M: Can you tell us what the ACGME does?

Dr. Thomas Nasca: The ACGME is a not-for-profit corporation whose sole purpose is the enhancement of patient care through the accreditation and improvement of graduate medical education. As such, we accredit all of the allopathic graduate medical education programs in the United States: about 8,900 training programs, which trained about 111,000 residents and fellows in 2009.

RW: How do you think the ACGME's work has changed in the last 10 years with the modern patient safety movement?

TN: The ACGME has evolved to be intensely concerned not only with the impact of education on our residents but also with its impact on their patients. It's increasingly clear that one cannot educate a resident to provide superior and safe patient care unless they're educated in an environment that provides superior and safe patient care. So you will continue to see an evolution in how we approach the institutional dimension of ACGME accreditation. In other words, in accreditation of the teaching institution itself, we will place greater emphasis on the quality and safety of patient care under the rubric of the six core competencies of physicians, with special emphasis on systems-based practice and practice-based learning and improvement.

RW: Is competency largely about embedding residents in systems that do this work of safety and quality well, or is it more about teaching them how to do that themselves for the time after they leave their training milieu?

TN: Those two dimensions are inextricably linked. I don't think that you can educate a resident to provide superior, safe patient care in an environment that doesn't provide such care. All you can do in that environment is tell them about it. If you're not demonstrating that to them and they aren't learning how to interdigitate with a system in order to produce those safe patient outcomes in their daily work, they will not be educated from a practical standpoint in how to do it.

RW: It strikes me that one of the hardest things you have to do is balance the tension in training between autonomy and oversight. I imagine as we're thinking more about patient safety and how to ensure it, there's a tendency to build in greater degrees of oversight, which comes with inherent challenges about how to get trainees to become independent providers by the end of their training period. How do you think through that tension?

TN: First, it is important to start with some global assessments. Most of the analyses looking at the quality or the safety of patient care in teaching hospitals would indicate that the aggregate outcomes are better in teaching hospitals than in non-teaching hospitals. That's a complex phenomenon and a balance of many different dimensions of those systems. So, we start from a position that there is not some dramatic deficiency in the safety of patient care in teaching versus non-teaching hospitals. That said, neither place is ideal. Our goal then is to provide education that occurs in the context of delivery of safe patient care, which requires a different way of thinking about the logical progression toward autonomy of our trainees. In the past, we have thought about the concept of graded responsibility based on the position of the resident in the program. In other words, a PGY1 house officer would require a greater degree of supervision than a senior house officer would. But these were not necessarily tied to the level of progress of the individual trainee. To assess each trainee in a more standardized and sophisticated way, I believe we are moving toward the delegation of authority and determination of the degree of supervision required based on the actual performance of the resident, what we would call achievement of milestones in the competencies.

What I would envision as the ideal educational setting is an institution that has the redundant safety systems required in the form of oversight. These systems ensure that errors do not reach patients, which is really what teaching hospitals have to be sure of, and assessment systems ensure that trainees are not given additional responsibility and additional authority with less supervision prematurely. We need to be sure that they are prepared and demonstrate the ability to move to that next level of less tight supervision and greater autonomy in the care of patients. Both these systems, those protecting patient safety in the learning environment, as well as educational supervisory systems that permit independent judgment and decision-making, must exist in order for educators to fulfill our responsibilities to society. We have dual responsibilities not only for the safety of patients in the training environment, but also to ensure that individuals who graduate from our programs are capable of proficiency in practice in a US system with limited or no supervision in individual clinical decision-making.

RW: Does that mean the length of training programs becomes fluid?

TN: I don't believe so. At least not in our current thinking. But within the training program, the time a trainee spends in various phases of the resident's experience may become competency driven, rather than solely time driven. There's a whole body of literature looking at the development of mastery and the amount of intentional practice required to move from being a novice or an advanced beginner through competency, proficiency, expert status, and mastery. Currently, most programs are structured to deliver a product that emerges without standardized measurement criteria in the competencies, other than for medical knowledge. But within that time course, I think we will see fluidity, for instance, in the time to perform the tasks of a first-year house officer versus a junior or senior house officer at the level of the individual trainee. One of the things that we'd like to get to at a national level, which would translate to the individual trainee, is, in each specialty, a standardized assessment at entry into residency training so that we can understand where our trainees are, because they're a heterogeneous group. For a long time, we've assumed that everyone coming out of medical school or entering a residency program was at about the same level, and we have empiric observations that that's not the case. We need to create assessments so that we can develop individual educational plans for each one of these young physicians to make sure that they meet the milestones that we expect at graduation.

RW: Some of the things you've articulated create some overlap with other organizations. For example, you talk about embedding trainees in systems that are safe, and I think that's partly what people would think of as The Joint Commission's role. You also talk about assessment of trainees at various stages, and at some level we think about that as the role of the boards. How do you think through all of the overlapping Venn diagrams between what ACGME does and other existing enterprises?

TN: In some ways, our structure actually facilitates that dialogue, in that the boards are one of our five member organizations who nominate not only to the ACGME, but also to each of the specialty-specific residency review committees (RRCs). So we have representation and work very closely with the boards. One of the interfaces now being stimulated by our discussions with American Board of Medical Specialties (ABMS) is how to create evaluation tools and tracking tools that are durable not only through training but actually through the maintenance of certification process. We are beginning some of that work in a bidirectional fashion. Some of the tools that our boards are developing can, we hope, be downloaded into residency training. We'd like to teach residents how to, for instance, use practice-based learning and improvement tools during their residency training, so it becomes seamless for them to use similar evaluation tools throughout their life as a physician. One of our standards is that the teaching institution be accredited by The Joint Commission. We are learning from them the kinds of systems that we would like to see in our teaching hospitals—the redundant safety systems required to assure the public that while we are educating these young physicians, their care is at the highest level.

RW: The decision to limit duty hours may, at least in my lifetime, have been the most controversial thing I've seen ACGME do. Looking back now 5 years later, how do you think that's gone?

TN: In some senses, I believe that it's gone remarkably well. This occurred in the context of standardizing a core set of program requirements that affect residency training programs in every discipline. So it is woven into the fabric of every set of program requirements no matter what the specialty or subspecialty. From an implementation and an oversight standpoint, we have very good data indicating that the vast majority of residency programs, the vast majority of the time, are well within compliance with these standards. We do have evidence of occasional lapses. We identify this by asking residents to evaluate their experiences in a resident questionnaire administered every year to every resident and fellow in the United States. We ask them to rate their individual compliance with the duty hours. Now this is often where the rub is. We accredit programs using a concept of substantial compliance, which means that the vast majority of the time the program complies with our requirements. On occasion, an individual trainee may vary from what the program actually says they're supposed to do. What we try to do is make sure that very few of the residents ever find themselves in a situation where they feel that they have worked in excess of what they are supposed to.

RW: So you've identified that people are complying with it. Is it working? Is it doing what ACGME intended it to do?

TN: I can't answer that because I don't know exactly what ACGME intended to do with it. You remember that the implementation of duty hours in this context was a complex political phenomenon as well as an educational phenomenon. What I can say is that both in the literature and in anecdotal discussions with residents at site visits and results of comments rendered in the residents' survey, it appears that residents are better rested. Residents have less fatigue, although there are still fatigue issues, and residents' psychologic posture is better. It is also clear that despite the assumption that when residents were provided additional time free from duty they would use all of that time for sleep, what they are doing is using most of that time to do many of the other things in life that people at their age do. There was this assumption that if you send residents home for 10 hours, they would get 8 hours of sleep. Well, they don't get 8 hours of sleep on the weekend. They get 5 or 6 hours of sleep because most "20 somethings" do that. And they use the other time to do their laundry and banking, or they go to the movies or they read or they study. They are having a more realistic life than perhaps we did in training. That is not a negative outcome; that is perhaps an unintended positive consequence. But what it does mean is that the issue is much more complex than just giving them enough time to sleep.

RW: Let me bring up three concerns that people talked about with duty hours reduction. One is the handoff. A second is that residents are not going to see enough volume, and the third is that they are developing a shift work mentality. Can you comment on those from your own personal thoughts or the organization's thoughts?

TN: I'll give you my personal thoughts about them, because I hesitate to speak for an organization as diverse as the ACGME. First, the issue of handoffs is there no matter how many hours residents work, as long as they don't work continuously for all years of training. The issue of handoffs is present in the health care delivery system independent of the teaching environment: practicing physicians hand off patient care to other practicing physicians as part of the natural call schedules of independent practice. Nurses hand off patients to succeeding shifts, and physicians hand them off in training. I think that the issue from a residency training standpoint is more complicated, in that the physician is in training and indeed needs to learn how to effectively accomplish a patient handover. But we need to recognize that residents are not the only people handing these patients off. Handoffs are occurring at multiple levels in the system at the same time.

Second, the number of handoffs does not increase dramatically with changes in the residency duty hours structure. If you go from every fourth night to every third night, the number of handoffs goes up by a small number because they're still making handoffs each day—who's making the handoff is the difference. And if they're only working 30 hours instead of 36 hours, the time of the handoff changes, but the number of handoffs does not of necessity change. All of that said, an extensive evolving body of literature indicates that it is even more complicated than we had originally thought, because the intentions of the physicians involved dictate the nature of the handoff, and the expectations around the use of the information vary depending on the setting. For instance, in some settings the physician who takes call is expected merely to be aware of the patient in case they receive a call. In other settings, they are expected to actively manage a patient's problem. The nature of the handoff needs to be different in those situations. So we need to develop a better lexicon around the terminology we utilize. Also, the level of information needed and the kinds of information needed are different for different levels of training. A more seasoned clinician, even at the level of a senior house officer, has an encapsulated frame of reference around diseases that is more sophisticated and more insightful than a first-year trainee. So the information necessary and how it is provided will be different. Clearly, we need to understand this better, and we need to make sure that there is oversight of the patient care transitions, which currently does not necessarily happen. Perhaps we need to consider the standardization of expectations around the information given during a transition, and the standardization of the oversight of those transitions.

RW: The second issue was that trainees simply will not see enough volume.

TN: I think we're starting to see some data, so for instance it appears that the technical surgical experience of residents under duty hour restrictions may not have changed all that much, other than the natural evolution of the procedures. What is concerning, though, is that the pre- and postoperative care opportunities and the continuity of pre- and postoperative care are being disrupted by duty hours in surgical disciplines. The number of patients is not actively tracked in internal medicine and some of the non-procedural disciplines. There, the issue is more continuity of experience in the acute illness phase rather than an issue of procedure volumes or clinical volumes. In most medicine programs, there are plenty of patients. The discontinuity is disrupting their educational experience. So again, we're starting to see data, but we don't have any outcomes data linked to that to know whether that causes a problem from an outcomes standpoint yet (either educational or clinical).

RW: The third issue was: are they developing a shift work mentality? Is there a change in the philosophy of practice and the way they see their work? And is that a bad thing?

TN: Well, I think that it would be disastrous if we produced physicians who viewed their work as time-limited in the sense that when a patient was in need in front of them that they would place a limit on time and walk away from a patient. That is an outcome that none of us can accept. However, we also have a responsibility to educate physicians to be able to recognize their limits. For instance, if physicians are ill, they should not be caring for a patient and they should be providing back up so that another physician who is well can care for that patient. Similarly, if they are excessively fatigued and not able to render high-quality patient care or clinical judgment, they should assure their patients that they're being cared for by a physician who can. I think we have to get to the point where we view these transitions as an educational experience in knowing when a physician has reached his or her limits. There obviously is debate about what that limit should be: Is it a rigid limit? Is it an arbitrary limit? Or is it an individually determined limit? I don't think we're there yet in completely understanding the answers to those questions. There are some determinants in the sleep literature indicating that we have some ways to go in setting up systems to educate physicians to recognize their fatigue and also to recognize, by the duration of their time on task, that they may not be optimally prepared to take care of their patients. It is here, perhaps, that "fitness for duty" assessments will help us clarify the issue.

RW: Clearly, going forward the organization has to make decisions under some uncertainty about what the right number is. Can you give any insight about how you in the organization think through getting to the correct number of hours, if there is such a thing?

TN: Yes. First let me give you my reflections on that, and second let me tell you what process we're going to go through to begin to look at that. The ACGME promised the medical community that it would take another look at our resident duty hour standards in 5 years. We are just now graduating our first group of general surgeons under these standards. We haven't yet graduated a cardiologist or a gastroenterologist completely trained under these standards. So there's not a lot of outcome information. We fully intend to keep the promise to look at these duty hours and revise them if necessary to continually evolve these standards. I believe it is highly likely that we will begin to look at modifying these duty hours in a specialty-specific fashion. What do I mean by that? Well, if a dermatology resident is working 80 hours, they're likely being abused. There will be discussion about the primary care disciplines, because there are significant challenges, for instance in pediatrics and family medicine, around education and the time allotted to providing in-depth experience and continuity experience within the current structure. And we also must look at level of training in crafting duty hours. For instance, in preparing residents for independent practice, the senior surgical house officer's duty structure may need to be different from that of the first-year house officer.

Other interesting data are coming out of our analysis of resident duty hours here at the ACGME: the house officers who work the most hours are the most junior house officers. If one were designing this from a training standpoint, that is probably the least logical way to do this. So we may have major structural issues in some disciplines as to how residencies are configured. I use internal medicine as an example because that's what I'm most familiar with. We have the least experienced house officers caring for the sickest patients and working the most hours. Does that make sense? Yet, our senior house officers are off walking around with consultants most of the time. I don't think that's a logical training model to produce a clinician ready to enter independent practice at the end of 3 years. We need to begin to talk about those issues because it may not just be duty hours. There's an epiphenomenon that I would like to introduce here. In the resident questionnaire, we ask questions beyond duty hours. What we find is that if we see duty hours violations in a program on the resident questionnaire, we see learning environment challenges too. It's issues of supervision, it's issues of infrastructure, it's issues of quality of education. They seem to march hand-in-hand in the majority of cases, and statistically they are very highly correlated. So we're talking about more than duty hours violations here. We're talking about less than ideal learning environments for our trainees. And we need to get at all of those issues, not just the duty hours.

RW: Both of us are former residency directors. As you think back during your time as residency director, what were the things about ACGME and the RRCs that bothered you, and how does that influence your work today and the way that you think about the organization?

TN: I wrote a piece (many years ago) for "Careers in Internal Medicine," for the Association of Program Directors in Internal Medicine, and I said that the ACGME creates prescriptive regulations that burden the 80% or 90% of programs that are doing a good job in order to get at the 10% of problem programs. My goal is to evolve the ACGME's accreditation process into a continuous improvement process that, instead of taking a biopsy every 5 years, continuously monitors the outcomes of residents, the residents' impressions of their training, and the core administrative and infrastructure parameters on an annual basis; to place a greater degree of oversight on the institution and lengthen our accreditation cycles to 8 to 10 years; and then to drop site visitors in if there's a problem. So we will be able to hold programs accountable not only for their processes but for their outcomes. Obviously, it's going to take us a few more years until those outcomes are measurable. But once we have that, the ACGME will be able to keep track and make sure that a program doesn't have problems, but also stay out of the hair of the programs so that we're not visiting them constantly, very much the way the Liaison Committee on Medical Education (LCME) monitors medical schools. I've been fortunate to have been a member of the LCME as well and to have learned that accreditation process, and I'm trying to apply some of the strengths of that system to what we do in the ACGME.

RW: I would assume that some of the outcomes at a place like UCSF, with trainees as good as they are coming in the door, are likely to be very good, even if the training isn't as good as it should be. How do you adjust for the substrate?

TN: One of the benefits of having national data is that we will not only have their performance parameters, but we will have their predictive parameters as well. My ultimate goal is to be able to look at the value added of the educational program. Now I actually don't believe that I will see that as head of the ACGME, because it will take us more than a decade to develop that data. I am very hopeful that two other things will happen. The first is that we will link not only with the medical schools and others to provide intermediate outcomes back to them so they can improve their educational programs, but we also will work so closely with the boards that we will have the ability to look at performance predictors in residency and link that to performance in practice. The next logical step is that we would then have a data system and a reporting system that tracks the performance of trainees throughout their training. We would have the ability to link that to clinical outcomes of our patients, which is our ultimate mission: improvement of quality of patient care through education of the next generation of physicians. We would have the ability to conduct a research project through all of the educational programs in that specialty across the country and then intelligently change program requirements based on empiric observation as opposed to professional opinion, which is our current method of creating standards. My hope is that we will become the model for outcomes-based accreditation and national, continuous improvement of education. That's obviously the long-term goal, and the only way that we will be sure that what we do in education actually has meaningful benefits for our patients. After all, that is the goal for all of us.


This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.