
In Conversation With… David M. Gaba, MD

March 1, 2013 

Editor's note: David M. Gaba, MD, is a Professor of Anesthesia at the Stanford University School of Medicine. An international leader in health care simulation, Dr. Gaba helped introduce the modern full-body patient simulator and the concept of crew resource management (CRM) training to health care.

Dr. Robert Wachter, Editor, AHRQ WebM&M: In some ways, the role of simulation seemed self-evidently important and right. After seeing how it had worked in aviation and other fields, what did you think would happen in terms of its uptake in health care and what has happened?

Dr. David M. Gaba: There have been a number of milestones in the uptake and development of simulation. In the IOM report, simulation was briefly mentioned as one of the strategies other industries use to increase safety. Originally, the major barrier was that people said, "Oh, that's a nice toy," but did not think it could be used for anything very interesting beyond, perhaps, training rank novices. The parallel we drew and developed over time with the CRM approach was very helpful in getting simulation recognized as more powerful than a tool for early learners alone and as applicable to learners at all levels.

In the last decade or so, perhaps the biggest change has been the development of more affordable simulators. The original commercial simulators cost around $200,000 to $250,000, which compared to the usual budgets for education and training equipment is a lot of money. About a decade ago, manufacturers developed simulators that, I like to say, do about 70% of what the original big ones could do but at 15% of the cost. This qualifies as a disruptive innovation, where companies come out with less capable devices at a much lower price point and thereby open up huge new markets. At the price point of $25,000 to $35,000, simulators then became available to a broader set of users across the spectrum of health care education and training. That was a critical factor in jump-starting the exponential growth over the last decade.

RW: Do you think the fact that simulation was seen as a cool toy hurt the field or advantaged it in some way, that people thought it was interesting and novel?

DG: It had both effects. On the one hand, being seen as a high-tech toy was good in attracting people in health care who had that interest and background. Many pioneers in simulation were clinical people with engineering or other technical backgrounds. But it also was a barrier because the people steeped in more traditional clinical education or formal training did not view it as a serious educational modality.

RW: You mentioned the remarkable decrease in cost of simulators that were nearly as robust as the earlier expensive versions. What other key changes have promoted the technology?

DG: Another key factor was making the simulators more portable. The original generations of simulators were largely tied to a fixed site. You could move them around, but it was very difficult. Now they are largely self-contained and can be run wirelessly, which makes them accessible to many different venues. You can use them in settings like training field responders, and within an educational institution you can move them from one room to another, from one floor to another, from the medical school to the hospital to the nursing school to wherever. The manikin-based simulators have improved in some features but not a lot overall. It still is very difficult to produce realistic plastic people. That may not be surprising because human beings are remarkable, but many things still are not systematically or categorically better than they were 10 years ago.

RW: If a breakthrough in technology would allow us to do more important things, what are those things?

DG: Well, one holy grail is to have completely believable, fully interactive computer-generated coworkers. To do a simulation with a whole team of people, you need the whole team present. Assembling that crew of people is a barrier for an individual student, a few students, or other learners at any level of training who want to do a simulation in which they interact with a whole team. That is more at the top of my wish list than this or that particular feature that would make a manikin more realistic.

RW: One obstacle in the early years was that there was a tremendous amount of face validity and some evidence from other industries, but the evidence in health care was not robust enough to overcome the logistical and price barriers. How has the evidence base evolved?

DG: People are making some progress at measuring various sorts of outcomes. In fact, our journal (Simulation in Healthcare) has pretty much stopped accepting papers that only report the lowest level of the Kirkpatrick scale, the reaction of the learners—we like to call them the "we-liked-it" type of outcomes. We are looking for outcomes above that level. We have adopted and adapted the translational research nomenclature. We consider a level one study to be one where the outcome is measured in simulation. A level two study looks at whether learners' actual clinical performance has changed. A level three study asks whether patient outcomes have changed. In the translational research paradigm, these are called the T levels: T1, T2, T3. There are some levels beyond this, having to do with dissemination, adoption, and whether population outcomes change. What most people would be shooting for is T3 studies of patient outcomes, and there are really very few of those to date. It is very difficult to study all the way up to that point, with many different confounders.

Right now, T3 studies are pairing simulation with other modalities and activities to improve safety and reduce adverse outcomes from the insertion of central venous catheters. Some good studies show benefits in reducing adverse outcomes and the costs thereof. The intervention is pretty circumscribed in that arena. The problem is very well defined, and we already do surveillance for the occurrence of those adverse outcomes. It is a perfect nexus for proving benefit. When we look beyond that arena to much broader sets of outcomes and interventions, especially those aimed at rare but catastrophic events, it actually may be impossible to get the level 1A evidence that everybody is seeking.

RW: In some ways this gets at policy issues of whether simulation should be required of trainees at various levels. Some people push back and say, "The evidence is not strong enough and doing it is too expensive," despite the face validity. Where do you come down on that?

DG: I have published work pointing out that if we were trying to use these studies to get a new drug approved, they would be completely insufficient. Part of the reason is that, unlike in pharmaceuticals, there is no deep pocket willing to pay for the large, long, and complex studies needed to get level 1A evidence. We will and should continue to chip away at the evidence base and develop it as far as we can, but at the same time, we have to recognize that we may never know the answer to certain questions. In fact, we have very little evidence to prove the efficacy of our traditional educational, training, and assessment paradigms. We already know that we are still hurting thousands or hundreds of thousands of patients per year through preventable medical errors and suboptimal care. If we want to keep getting the same results, we can continue to do things the same way. I think people have been voting with their feet; we have seen a dramatic and essentially exponential rise in the adoption of simulation for all levels of learners—from early learners all the way up through experienced personnel in actual clinical arenas.

RW: Is your sense that simulation should be a requirement of medical school students, residents, or practicing physicians before they are deemed competent to do new techniques?

DG: I am biased, but I certainly think it should be applied, and there should be incentives if not downright mandates to do that—both at the level of teaching and learning and at the level of formative assessment. Even though it's a "Kevlar vest" issue, I do believe that simulation has a significant role to play even in high-stakes assessment of experienced clinicians as they transition from training to full experienced practice, whether physicians, nurses, or allied health personnel. The vision I have is that people will cycle through different modalities of simulation—as individuals, teams, and whole work units—for an entire career. The only way you would get out of doing simulation on a recurrent basis is if you retire from the business or if you die. That is essentially what is done in other industries where we have all come to expect ultra-safe performance. So when we get on an airliner nowadays, we have every reason, to the nth statistical degree, to expect that a crew of people who have never met before will do a fine job of getting us from point A to point B. Simulation is clearly one of the ways that has been accomplished.

RW: Talk about the fundamental differences between aviation and health care that allowed the imposition of mandatory simulation in the former and has made it so difficult in the latter.

DG: There are big organizational differences between the fields, so even though there are many cognitive parallels between some aspects of the work, at the organizational level it is quite different. In aviation, for example, a single federal agency, the Federal Aviation Administration, is responsible for regulating the operation and safety of aviation. No federal agency regulates the practice of medicine. We have one agency that regulates drugs and devices and another that regulates how people are paid. The states and the three federal jurisdictions of health care control the regulation of who can practice, what they can do, what the standards are, and so forth. In general, that regulation is not only extremely diverse in how it is done, but relatively lax compared with other industries.

Aviation is a regulated industry. By the stroke of a pen, they impose requirements for certain kinds of training and performance assessment. You can do it in a real airplane or you can do it in a simulator. Everyone chooses the simulator because it is safer, cheaper, and probably more effective. It is a completely different organizational paradigm, on top of which about 12 airlines account for 95% or more of the passenger miles flown. We have approximately 6000 hospitals, an equivalent number of standalone surgi-centers, and hundreds of thousands of doctors' offices, clinics, and other facilities. The scale of decentralization of health care is massive. Whereas, even though aviation is spread out over many different airports and airlines, it still is a much more compact and cohesive undertaking. Those organizational differences are profound.

The flipside is that we design and build airplanes and we know how to make them safe. We know how they work. Whereas, we don't design and build human beings—we don't even get the instruction manual. Health care will never be exactly like aviation, and we don't really want it to be. I like to say though that our pendulum is way to one side, and aviation and nuclear power are perhaps way to the other side. We don't need to go all the way to that other side; we just need to be somewhere in the middle to reap the benefits of some things they've done while retaining the flexibility and resiliency and the clinician–patient relationship that is so important in health care.

RW: That all rings true and yet, at least theoretically, The Joint Commission tomorrow could say one of the ways that hospitals need to assess that their doctors are competent is through simulation. The boards or large health care organizations could also theoretically do that. Do you envision that happening?

DG: We have certainly seen it in other places around the world. In Israel, a simulation-based examination in anesthesiology is now part of the board certification process. In the United States, the American Board of Anesthesiology has required taking a simulation-based course, largely modeling the CRM approach that we pioneered 22 years ago. It is now a requirement for Part 4 of a 10-year maintenance of certification cycle. We know that other domains in health care are starting to consider adding various forms of simulation to board examinations and other examinations. One kind of simulation is already a required part of medical licensure—the clinical skills exam, which is a set of standardized patient actor encounters. It is a requirement for the USMLE (United States Medical Licensing Examination). These things are starting to percolate into the system. Whether and when organizations like The Joint Commission or large hospital networks will start to mandate this is anybody's guess. I think those days are coming; certainly some of them are already formally and informally encouraging the use of simulation to improve care. How they will roll that out in various places and to what degree they will make it mandatory versus encouraged is one of the hot topics where a crystal ball would be really handy.

RW: If a large health care system wanted to get into the simulation business, it strikes me that they could go in many directions. Pieces of it relate to improving cognitive skills or history taking. Pieces of it relate to performing procedures. Others relate to teamwork behaviors. That is all on the improved performance side, and then there is the whole assessment side. Where would you recommend that they go first?

DG: It really depends on where they believe they could use the most improvement. Ideally, we would like all those things to be done everywhere and simultaneously because there are certainly many ways that people have demonstrated with some pretty persuasive kinds of experiential data or information that simulation can be useful. There is no one size fits all. It really depends greatly not only on what the needs are but also where the zealotry lies within the institution and what seems most interesting to them to tackle first. So many things could be done, and we have seen a lot of "paralysis by analysis" as people try to figure out what the ideal thing is. We are believers in the Nike philosophy of "Just do it." Because it is better to start trying some of these things, seeing what improvements you can make, and then working from there rather than trying to figure out the ideal at the beginning and having a very rigid plan.

RW: You are dealing now with a new generation of trainees who grew up in a simulated world, with World of Warcraft or Facebook. How does that change the nature of your work?

DG: We have seen this change over the last 20 years. My generation and those a little older than me were very leery of technology of any kind and of anything electronic. Obviously, with more recent generations, that intrinsic fear and distaste has completely evaporated. All the generations do share certain issues and concerns. Since the health care arena has largely lacked rigorous performance assessment of experienced personnel—once they get past their training stage and perhaps past board certification—the notion that there might be recurrent examination and simulation-based assessment is an issue for all generations. Nevertheless, the intrinsic fear of technology is largely evaporating as some of us old-timers, and those older than us, retire.

RW: Is the main obstacle then that we are left with just fear of looking bad? Are concerns about shame the main problem to overcome?

DG: That is an issue. Institutions, instructors, and curricula need to address those kinds of issues head on. Often we are aiming to create a low-jeopardy environment, especially for teaching and learning exercises, as opposed to performance assessment. I strongly believe that we should keep performance assessment sessions separate from teaching and learning sessions. People need to know when they are in an exam, especially an exam that has consequences. Examinations are fine with simulation or other means, but people need to know it is a test. Conversely, we think learning opportunities work best in low-jeopardy settings, where the idea is to extract the maximum learning rather than to worry about how you are doing on an examination. If instructors handle those things properly and sensitively, we have largely been able to overcome the intrinsic and natural fear people have of being watched by their peers, or even of the learning environment itself; doing a simulation feels like a test even when it is not graded. That is a little nerve-wracking and stressful for everybody, but we remind them that if it were not challenging and stressful, there would probably be no reason to do it with relatively experienced people; we would just let them go back to regular clinical work to learn what they can and hone their skills from that. But we know that, in general, that is not enough to optimize patient care.

RW: I wonder whether the whole direction of MOOCs (massive open online courses) and distance learning will transform simulation, obviate the need for onsite presence, and allow you and others to scale up what you're doing to more distant audiences. Is that beginning to happen?

DG: What we are seeing right now (Stanford is one of the schools at the forefront of this movement) is often called the flipped classroom approach, where a lot of the didactic materials are in online videos, lectures, and interactive things that people can do from the privacy of their home. Simulation is a perfect thing to do for that interactive and experiential component where we do not yet have the kinds of virtual worlds that would be highly effective and able to replicate the full realism and complexity of the clinical world. So right now, that is where the MOOC, the flipped classroom approach, and simulation are interacting. Someday, another one of the holy grails is to have the Star Trek Holodeck—virtual environments so realistic that you cannot tell them from the real thing—able to be done in decentralized ways. We are not anywhere near there yet, but we will see how all that plays out over the next couple of decades.

RW: Do you see any disconnect between what people learn in a simulated environment and then when they deal with real people, or does it feel like it is directly translatable?

DG: That is one of the most difficult things to study. We have opined that to study some of those issues of transference to the real world, you really need an ethnographic approach where you embed observers in the real world. Those would be long and complex studies. One thing to remember is that no one doing simulation is suggesting it can replace the traditional apprenticeship model of health care, where a lot of your experience is gained by working with real patients under the supervision of people who already know what they are doing. What we are really trying to do with simulation is fill in some of the gaps and do things we have never been able to do. One thing simulation can do for trainees is allow them to be "it." Years ago, physicians (and I'm sure this is true for nurses too) learned to be "it" in part at charity hospitals, VA hospitals, and other settings where they were given more freedom to work independently even though they were still learning the craft of health care. Nowadays, fortunately I think for our patients, we don't really allow that anymore. The first time a resident really is "it"—completely the final decision-maker in patient care—is when they finish their residency and take care of an unsuspecting populace, as I like to say. With simulation, even very early learners (students, early interns, and residents) can start to feel what it's like to be "it" and learn the aspects of decision-making and teamwork that are part of being the final decision-maker. We cannot do that safely in real patient care, but we can do it safely in simulation.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.