
In Conversation With… Matthew Weinger, MD

August 1, 2018 

Editor's note: Dr. Weinger is Director of the Center for Research and Innovation in Systems Safety and Professor of Anesthesiology, Biomedical Informatics, and Medical Education at Vanderbilt University. He holds the Norman Ty Smith Chair in Patient Safety and Medical Simulation. We spoke with him about the current state of simulation training in health care, barriers to progress, and potential innovations.

Dr. Robert M. Wachter: From the early days, was your view of simulation that it was about patient safety? What were your thoughts of what it could do for health care and for training?

Dr. Matthew Weinger: During my fellowship, I became interested in human factors engineering. I've always taken a systems view. I've done all kinds of simulation, from computer simulation of systems to mannequin-based and human-based simulation of care. But across that spectrum, I believed early on that simulation has a role not just in training but also in system and process improvement and in assessing practitioner competence.

RW: Looking back on when you got interested in it and got involved as a faculty member, how has it played out? About how you expected, or have there been interesting bumps in the road that you didn't anticipate?

MW: During my fellowship, I realized that it was early in the development of the field, and it was going to be a long path before I could generate a lot of research papers doing simulation alone. Also, the funding wasn't there. So, I was seduced by behavioral neuropharmacology, and that was a very successful path to promotion for me. I continued to dabble in human factors and simulation for about 10 years until, with the IOM [Institute of Medicine] report and the emphasis on patient safety, all of a sudden interest and funding were there, and the field had developed technologically to the point where it wasn't a huge ordeal to do a single study.

I would say the field has not developed as quickly as I might have predicted. While I was a fellow, David Gaba at Stanford and Michael Good in Florida were just beginning to create the modern mannequin-based simulators. Around the same time, David had translated aviation crew resource management into anesthesia crisis resource management. That has made a huge impact on the field. But what I've learned in academics is that nothing goes fast, and that has been true of the evolution of simulation. I really thought that we'd be much further along in terms of the physical simulation of tissue and of plastic humans. I also thought that the underlying physiologic models would allow animatronic human patients that you could give drugs to and they would react in the way that a human would react. We're still a ways away from really stable, accurate, reliable underlying physiologic models.

RW: Is it that simulation did not take off in the way that many had hoped, which prevented there from being a market and enough money for companies to build these better simulators, both anatomically and physiologically? Or is it the fact that the simulators are not quite as spectacular as we hoped?

MW: It's probably both, but I think there just is not a lot of money in health care education. The primary role for simulation for the last 20 years has been in training. Federal funding hasn't been nearly as generous for safety in general, and for simulation in particular, as it has been for many other fields. That slows progress, but it also reduces the number of people interested in pursuing a career in safety and simulation, because it's much easier to get funded in molecular biology or pharmacogenomics or whatever.

RW: Talk about some of the obstacles to the widespread diffusion of simulation. People hoped that both training on and being tested for competency on a simulator would become as ubiquitous in health care as it is in aviation. Why hasn't that happened?

MW: It's a function of the structure of health care. To support that argument, you can look at the VA and Kaiser as examples of integrated employee-based systems where training is more widely accepted. The VA is maybe less cost sensitive, so it has perhaps done more in terms of physician-based training than any other entity within the United States. Some other countries do more simulation, but there's a fundamental cultural belief that's still hard to shake—the "see one, do one, teach one" bit—that it's only by practicing on humans that you really can learn. We still see that here at our place: "Simulation is fine, but I'm not going to let this resident leave this good case right now, because that's far more valuable than their time in the simulation center."

Contrast that with nuclear power, where a safety culture is imposed in part by the regulators and in part supported by the fact that any plant that has a safety issue affects the entire industry. Whereas, if a hospital has a safety issue that becomes publicly known, the other hospitals in the community benefit [economically] from it. So, it's not a rising-tide environment for health care. In nuclear power, every power plant control room operator must be in training 1 week in 5. That's part of their work. A substantial part of that training is in a high-fidelity simulated control room; they have parallel control rooms: one is the real one, and one is an identical simulated one. They practice virtually everything that they could conceive of happening in the real control environment. Health care is never going to do that because of the cost, the production pressure, the lack of regulatory requirements, etc.

RW: In a field like nuclear power or aviation, how do you know when you're doing too much? You can imagine that 1 week in 5 is spent on circumstances that virtually never happen. How do you calibrate that? I imagine that some of the time they're training over and over again for things that essentially never happen.

MW: Interestingly, in nuclear power the operators will say that the training week, while sometimes stressful, is the most interesting week of the 5, because during the other 4 weeks virtually nothing happens unless they're in a scheduled outage, which they also practice. And that happens once every 18 months. Do they do too much? Well, they have a pretty impressive safety record. Despite that, lots of events occur across the 61 nuclear power plants in the US. In commercial aviation, pilots don't train as often, but they do have annual simulation-based training and certification. Is that too much? Ask the end customer. You ask a question that gets at a fundamental problem with patient safety in health care: we cannot measure its presence; we can only measure its absence.

RW: Looking back over the last 10 to 20 years of research, how would you summarize what the research has told us about the value of simulation in patient safety?

MW: The best evidence is in technical or procedural simulation, especially in trainees. In that environment, it's unambiguous that practicing ahead of time reduces the time to complete real-world patient care tasks and reduces errors. Some studies show improved outcomes as well. We know that things we see in simulation also occur in the real world. We know that simulation-based interventions can change real-world behavior, both of trainees and of experienced personnel. In 2015, we published an AHRQ-funded study showing that simulation-based training of dyadic teams substantially improved handovers from anesthesia providers to recovery room nurses. That improvement persisted for 3 years. Well-done studies, most in procedural training, show beneficial effects on patient outcomes. Further, multicenter studies, for example from the VA and of obstetric teams, showed that team-based training can improve outcomes. Eduardo Salas and his colleagues recently published a meta-analysis that showed clear benefit of team training with simulation on patient outcomes.

Of all the things that simulation is good for, training teams in nontechnical skills—teamwork, crisis resource management, interpersonal skills, those kinds of things—is probably its best and most effective use. It's also the most expensive. There should be more and better evidence that simulation-based training interventions improve outcomes, but there are structural impediments to doing those studies. Funding is limited. While AHRQ has a longstanding R18 program for simulation and safety (and I've been a grateful recipient of three of those grants), the total dollars available per grant limit the kinds of research that can be performed. Conducting a rigorous multicenter randomized controlled trial (RCT) of a simulation-based intervention on hard patient outcomes would require at least several-fold more funds than are available in the AHRQ program. The National Institutes of Health (NIH) hasn't to date been particularly interested. Simulation research can be more expensive than clinical studies, especially if it requires pulling experienced physicians away from income-generating work to get the training. That said, I'd predict that a properly powered, multicenter, comprehensive RCT of a simulation-based intervention would definitively show that simulation-based training improves patient outcomes.

RW: As the payment world lurches toward value-based payment and outcome-based payment, do you think the evidence is strong enough to drive increased uptake? Or is simulation still expensive and complicated enough that it won't rise up as one of the things you do when you're being held accountable for outcomes, including safety?

MW: In an optimal world of value-based care, you would see at least some organizations decide that simulation is the way to improve value in certain service lines. Simulation will also be more widely used if federal regulators intervene. For example, the FDA [Food and Drug Administration] may decide that a new, complex, high-risk procedure or device requires people to demonstrate competence in its use in a simulated environment before they can use it on patients. There is some precedent for that. The Joint Commission may also decide to step in. There's a national initiative led by some societies and foundations to enhance clinicians' training in the use of complex medical technology before use. If that catches on, simulation should play a role there. Also, in terms of general clinical competence, one could see ongoing assessment of practitioners along the lines of a paper we published last year in Anesthesiology.

RW: Are the standards coming from the training program accreditors strong enough? Should they be saying that to graduate from an anesthesia program or a medicine residency, you need to do x hours of simulation?

MW: Some of that is there already. Emergency medicine is probably on the cutting edge. The American Board of Anesthesiology launched a simulation-based component of its primary certification exam using OSCEs [Objective Structured Clinical Examinations] with standardized patients, who are actors trained to portray patients and, in this exam, other professionals as well. I'm on the task force that has been writing and testing those scenarios. I think other boards, where appropriate, will follow suit. In fact, the surgeons are using procedural simulation, like Fundamentals of Laparoscopic Surgery, as part of certification. There will be a move that way for new trainees, and those folks are then going to be more comfortable and familiar with doing simulation throughout their careers. The American Society of Anesthesiologists has a simulation training initiative, supported by the American Board of Anesthesiology as part of its Maintenance of Certification in Anesthesiology (MOCA) program. For practice improvement, anesthesiologists every 5 years can enroll in an all-day, full-scale simulation course that focuses on crisis event management and teamwork skills. We'll do about 1700 of those this year across our network of 50 endorsed centers. That's unique across specialties in the US, although other countries are doing similar things.

RW: Talk about the technology. What do the simulators do now that they couldn't do 5 or 10 years ago?

MW: Well, the mannequins are a little bit less expensive, but I cannot say that they're that much more sophisticated. The underlying models have not really made advances. There are two kinds of mannequin-based simulators. The first kind are those where you basically script everything they do: in real time, the simulationist can change mannequin-derived parameters. You can also create preconfigured macros so that when one of the participants gives a drug, you can push a button representing that drug and it generates a simulated response, such as a percent or absolute change in various parameters like blood pressure or heart rate. That's the most common type of mannequin used, partly because they're less complex and less expensive (under $100,000). The more expensive mannequins have more sophisticated electronics and underlying computer-based physiological models. These are less popular because they cost a quarter of a million dollars or more and, in my experience, are less reliable. But the lower cost mannequins may require more domain expertise to run them.
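Editor's note: To make the macro mechanism concrete, here is a minimal Python sketch of the scripted drug-response design Dr. Weinger describes, in which pushing a drug button applies preconfigured percent or absolute changes to displayed vital signs. The drug names, parameters, and magnitudes are illustrative assumptions, not values from any actual simulator product.

# A minimal sketch of the "preconfigured macro" design described above.
# Pushing a drug button applies scripted percent ("pct") or absolute
# ("abs") changes to the mannequin's displayed vital signs. The drugs
# and magnitudes below are illustrative placeholders only.

from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float = 80.0    # beats per minute
    systolic_bp: float = 120.0  # mm Hg

# Each macro maps a vital-sign parameter to a (mode, amount) pair.
DRUG_MACROS = {
    "esmolol":   {"heart_rate": ("pct", -20), "systolic_bp": ("pct", -10)},
    "ephedrine": {"heart_rate": ("abs", +15), "systolic_bp": ("abs", +20)},
}

def push_drug_button(vitals: Vitals, drug: str) -> Vitals:
    """Apply a drug's scripted response to the current vitals."""
    for param, (mode, amount) in DRUG_MACROS[drug].items():
        value = getattr(vitals, param)
        value = value * (1 + amount / 100) if mode == "pct" else value + amount
        setattr(vitals, param, value)
    return vitals

v = Vitals()
push_drug_button(v, "esmolol")
print(v)  # Vitals(heart_rate=64.0, systolic_bp=108.0)

In such a scripted simulator, a human operator still decides when to fire each macro; no physiologic model computes the response, which is the domain expertise the lower-cost mannequins require.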

RW: Meaning that if someone is saying "push a drug," then there has to be someone behind the curtain showing what would happen if that drug went in, as opposed to that all happening automatically?

MW: Most sites use macros for actions that you anticipate. Yet, a surprising number of times, people do things that you didn't expect, even after you've run the same scenario multiple times. Regardless, there has to be somebody experienced monitoring what is happening, both to maintain the realism of the simulator's responses and to be able to debrief the scenario. The debrief is the most important aspect of a simulation experience; reflecting on what you did is a crucial step in adult learning.

Getting back to your prior question, in terms of other improvements, video graphics have finally started migrating from consumer gaming into health care, so the screen-based simulators are improving. These can be really good for training somebody to recognize a clinical situation and then do the correct technical things: "Oh, this is atrial fibrillation. I'm going to give a beta blocker." But they are not yet good at training people in nontechnical skills. Until our health care simulations are like multiparty games with avatars and actual real-time interactive voice communication, we're not going to be replacing mannequin-based simulation for team training. I suspect it'll still be 5 to 10 years before we see that.
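Editor's note: The "recognize the situation, do the correct technical thing" drill that screen-based simulators handle well can be reduced to a lookup of acceptable responses for each presented situation, as in this minimal Python sketch. The scenario content is an illustrative assumption, not clinical guidance or any vendor's actual logic.

# A minimal sketch of a screen-based "recognize and treat" drill:
# present a situation, accept the learner's action, and grade it
# against a set of acceptable first-line responses.

ACCEPTED_ACTIONS = {
    "rapid atrial fibrillation": {"give beta blocker", "give calcium channel blocker"},
    "ventricular fibrillation":  {"defibrillate"},
}

def grade_response(shown_situation: str, learner_action: str) -> bool:
    """Return True if the learner's action is an accepted response."""
    return learner_action in ACCEPTED_ACTIONS.get(shown_situation, set())

print(grade_response("rapid atrial fibrillation", "give beta blocker"))  # True
print(grade_response("ventricular fibrillation", "give beta blocker"))   # False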

RW: It seems obvious that both virtual reality and Alexa are two trends that you would think would dovetail here over time. Are those the two biggies, artificial intelligence built into the system and then virtual reality as the immersive environment?

MW: Virtual and augmented reality are ahead of voice recognition technologically. The military is probably doing the most to drive this, and we're making progress. Another big advance is the merger of standardized patients with all of the other simulation technology—the mix of carbon, silicon, and plastic. Most advanced centers are doing what we call mixed-reality simulation. This is a combination of the simulated clinical environment and a patient, which is typically inanimate if you're going to do anything invasive to it but can also be human actors portraying various scripted roles. We can also incorporate virtual reality or partial task simulators into the simulation (e.g., strapped to the patient). A coherent example of this is the "cave," or what I call the simulation holodeck. There aren't many of these because they require construction and are expensive, and there's not a lot of money in either education or research.

RW: And what is that?

MW: The cave is a physical environment where you can project onto the walls pictures of where you are. Some new technologies are less expensive and allow you to do that in a three-dimensional sort of way. They can also incorporate objects, smells, and sounds that really make you feel you're immersed in the real world, for example, on the battlefield doing a MASH-type casualty encounter. That is why the military is particularly interested, because they often have to provide care in hostile environments.

Two other advances that we're just beginning to see are going to be really exciting. One is augmented reality, which is different from virtual reality. You still wear Google Glass or similar eyewear, but with augmented reality you see projected images while also "seeing through" the goggles. So you have an overlay of simulated material on either standardized patients or, in the clinical environment, actual patients. For example, instead of moulaging a standardized patient to look like they have smallpox, the person doing the training would wear glasses that superimpose those lesions on a normal healthy person. We're just starting to see this technology.

The other thing that is absolutely fabulous is computer-guided surgery. Here you take the actual CT scan of the patient you're operating on, and that is then overlaid on the actual anatomy (e.g., using video screens or goggles) and/or used to allow computer models to guide or assist the actual surgery. So, what does that have to do with simulation? Well, once you have the anatomy digitized, trainees or even experienced surgeons can practice on the actual patient's anatomy and physiology before they operate on that patient in the operating room. You can imagine, for example, in pediatric cardiac surgery—where you have unusual anatomy—that the procedure you'd normally do might have to be modified, and you'd much rather figure out what works in a simulator before you go and try to do it on the actual patient.

RW: Do you find that experienced surgeons, for example, are willing to take the time to do that? The idea is unbelievably cool. I remember seeing a presentation about it 5 or 6 years ago: embed the CT or MRI result into the simulator, and you get to practice and run through the operation beforehand. The question is, even if you build that magic, will an experienced doctor really take the time to do that?

MW: Complex question with a complex answer. As with any new innovation, there will be early adopters, and those folks will do it. Then the question is what drives further adoption. One thing I'm seeing with new technology (I'll use robotic surgery as an example) is that the patients are driving use. They come to our place for robotic urologic surgery—even though the evidence that it's better is not all that strong—because they think it's better and they want to have that. You could imagine savvy marketers of these new technologies saying, "Your surgeon will practice the procedure on a simulation of you until they get it right before they do it on you." That's something patients are going to want.

RW: I'd want that. That sounds good.

MW: The other drivers in health care either have to be economic or regulatory, and it's hard to know how that's going to play out.

RW: As you think about research directions for you and your team over the next 5 or 10 years, what are the most important questions you want to address?

MW: We want to better understand why clinicians do what they do in different situations. We are interested in both clinicians and patients. We did a study in which board-certified anesthesiologists who had been familiarized with the simulation environment were given a crisis management case to manage, and blinded video raters scored their performance against a separate group of domain experts' opinions about what optimal performance would be. They didn't do so well. About 25% to 30% of the individuals managed one or more aspects of the case in a way the raters viewed as poor. The performance gaps that we saw in the study were similar to what has been reported in closed claims malpractice studies and other case reports: failure to escalate therapy when the initial therapy is not working; failure to use available resources effectively, including calling for help; failure to speak up or engage with other team members when those other people's actions are required for successful management; and failure to follow evidence-based guidelines. But we don't know why people do what they do in these situations. We are interested in using cognitive engineering methods to really get at what people are thinking when they make decisions in the clinical environment.

We're also interested in advancing simulation for assessment. There's still a lot of work to be done before that becomes a valid and reliable approach for summative assessment. I'm particularly interested in using simulation to analyze systems, model proposed changes, and then study whether those changes really made the improvements predicted by the simulation.

RW: When you talk about modifying systems, what level of the system?

MW: The higher the level, the more effective it would be. Realistically, we're talking about doing this at the clinical unit level.

RW: Anything that you wanted me to ask you that I didn't?

MW: You were asking me about things that surprised me. Some things didn't surprise me but still disappoint me: the failure of health care systems to more widely adopt simulation training for safety-critical functions, the relative lack of federal funding for simulation-based research, and the lack of progress in translating technological advances from other fields, particularly consumer video games and entertainment, into health care simulation applications.

RW: Seeing the new interest by the Googles, Amazons, and Microsofts of the world in health care, you wonder whether that will come along with some more innovative uses of technology in this area. The primary driver is taking clinical data and doing Google-y stuff with it. But as these companies jump into health care, you wonder whether they will drive along some of the other pieces.

MW: It's hard to say. These companies are looking for new ways to profit, and as I said, training is typically not a profit center. It's a cost center, and one that health care can ill afford in its current organizational structure. So, unless we are successful at moving to a value-based model, I don't see big changes on that front. But there will be other opportunities—especially the simulation of big data at the system level. If you ask me what I really want to do in health care, I would love to build a hospital from scratch based on safety and human factors principles, with frontline clinicians and patients integrally involved in the design from the beginning.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.