
The Literature on Health Care Simulation Education: What Does It Show?

David A. Cook, MD, MHPE | March 1, 2013 


The education of health care providers is an integral part of patient safety. Advocates of simulation-based education claim a particularly important role for this modality (1), given the opportunity it provides to rehearse skills in a risk-free environment, to engage in repeated and deliberately structured practice, and to be assessed and receive timely feedback.(2) While these assertions have prima facie credibility, it seems reasonable to take a step back and consider the evidence supporting and informing the use of simulation in health care education.

A series of recent meta-analyses has quantitatively synthesized the evidence for two common simulation modalities: virtual patients (3) and technology-enhanced simulation.(4-6) For the purposes of these reviews and the present discussion, virtual patients are computer-based interactive cases in which "learners emulate the roles of health care providers to obtain a history, conduct a physical exam, and make diagnostic and therapeutic decisions."(7) Technology-enhanced simulation requires "an educational tool or device with which the learner physically interacts to mimic an aspect of clinical care for the purpose of teaching or assessment" (6) and includes virtual reality systems, low- and high-fidelity manikins and models, cadavers, and live animals. The key distinction is that virtual patients require no specialized equipment while technology-enhanced simulation does. (A third form of simulation, standardized patients, has not recently been the subject of a systematic review, and—while important—will not be further considered in this perspective.) These meta-analyses collectively included more than 1000 individual studies, which enrolled well over 50,000 participants including student and practicing physicians, dentists, nurses, emergency medical technicians, respiratory therapists, and physical therapists. I will discuss the results in relation to four key questions.

First, does simulation-based education work? More than 600 studies enrolling over 36,500 participants have attempted to answer this question by comparing simulation-based training against no intervention. In these comparisons, both virtual patients and technology-enhanced simulation are consistently associated with large, statistically significant benefits in the areas of knowledge, skills (instructor ratings, computer scores, or minor complications in a test setting), and behaviors (similar to skills, but in the context of actual patient care).(3,6) For direct patient effects (e.g., major complications, mortality, or length of stay), the benefits are smaller but still significant.(8) Clearly, simulation-based education works—at least when compared with no instruction.

Having established that simulation works, the next question may be, how does simulation compare with other instructional approaches? This question is challenging to answer because every study uses a slightly different simulation intervention and a slightly different "other" approach. However, evidence from more than 100 studies and 7000 participants indicates that simulation is non-inferior to other approaches.(3,4) Technology-enhanced simulation is associated with a small but statistically significant benefit for outcomes of knowledge and skills, while for patient-related outcomes (behaviors and direct patient effects) the benefits approached but did not reach statistical significance.(4) In sum, simulation-based education is as good as, but perhaps not substantially better than, other approaches.

However, my colleagues and I observed high variability between studies in all of these analyses, suggesting that some simulation interventions are more effective than others in certain settings. This raises a third question: Why are some simulation interventions better than others (and how can we improve them all)? This is perhaps the most important question, and answering it requires comparative effectiveness research (9)—namely, studies comparing different simulation-based approaches to explain what works, for whom, and in what context. To this end, a review of 289 studies of technology-enhanced simulation (enrolling nearly 20,000 participants) confirmed theory-based predictions that feedback, repetition, range of difficulty, cognitive interactivity, clinical variation, distributed practice, individualized training, and longer training time significantly improve skill outcomes.(5) Similar analyses for patient-related outcomes (behaviors and patient effects) revealed benefits of similar direction and magnitude that approached but did not reach statistical significance.

The final question is perhaps the most difficult: Is simulation-based education worth its costs? Although effectiveness is now well established, value judgments require consideration of costs—not only the price of the simulator (many of which cost upwards of $75,000) but also faculty time, training expenses, facility fees, and opportunity costs (i.e., what else could trainees do with their time?). Very few studies have enumerated these costs, and none has offered a complete accounting (10), leaving us very much in the dark when it comes to judging the value of simulation-based education. However, one thing is clear: More expensive simulators are not necessarily better. Numerous examples illustrate that low-fidelity, low-cost training models can yield outcomes equal to much more expensive simulators.(11)

So what can we conclude in light of these findings? Available evidence supports the following recommendations:

First, we should use simulation. The conceptual arguments for patient safety are compelling (1) and these, together with the empiric evidence summarized above, support the use of simulation-based rehearsal in addition to (and often prior to) learning through working directly with patients.

Second, simulation will not always be the best choice. Educators have a large toolbox from which to design instructional activities, and the tools they select should be determined by the instructional objectives, learner needs, patient safety concerns, and available resources. Technology-enhanced simulation is most often used for procedural training (6), while virtual patients are frequently employed to develop clinical reasoning (3), and these roles make sense conceptually.(12) However, we need more research aimed at clarifying how to choose between simulation and non-simulation approaches—rigorous qualitative studies exploring the strengths and appropriate niche of each approach, and meticulous cost-effectiveness investigations estimating the true (and comparative) value of simulation-based education.

Third, educators should use the evidence-based best practices outlined above (feedback, repetition, etc.). However, much remains to be learned about how to optimally implement simulation-based education. We need programmatic research focused on what works, for whom, in what circumstances, and at what cost. Comparative effectiveness studies will play a critical role in this pursuit.(9)

Fourth, the appearance of the simulator is less important than a proper representation of the key steps in the task (functional task alignment). High-cost, high-fidelity simulators are aggressively marketed and increasingly ubiquitous, but they are often unnecessary.(11) Rather than focus on the superficial appearance or a long list of available functions, educators should determine the key steps in a given task (task analysis) and create a simulation experience that emulates these steps as closely as possible. Simple, inexpensive simulators can usually facilitate this acceptably well.

Finally, selecting the simulator is only a small step in the development of simulation-based education, and it is usually not the most important decision. Successful simulation-based education also requires carefully selected and sequenced instructional events surrounding the simulated task, appropriate faculty development, and institutional commitment.(13,14) Institutions that enjoy these resources and commitment are seeing important gains in instruction and patient safety through the thoughtful use of simulation.

David A. Cook, MD, MHPE
Division of General Internal Medicine and Office of Education Research
College of Medicine, Mayo Clinic



1. Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: an ethical imperative. Acad Med. 2003;78:783-788.

2. Reznick RK, MacRae H. Teaching surgical skills—changes in the wind. N Engl J Med. 2006;355:2664-2669.

3. Cook DA, Erwin PJ, Triola MM. Computerized virtual patients in health professions education: a systematic review and meta-analysis. Acad Med. 2010;85:1589-1602.

4. Cook DA, Brydges R, Hamstra SJ, et al. Comparative effectiveness of technology-enhanced simulation versus other instructional methods: a systematic review and meta-analysis. Simul Healthc. 2012;7:308-320.

5. Cook DA, Hamstra SJ, Brydges R, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach. 2013;35:e844-e875.

6. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306:978-988.

7. Effective Use of Educational Technology in Medical Education: Summary Report of the 2006 AAMC Colloquium on Educational Technology. Washington, DC: Association of American Medical Colleges; 2007.

8. Zendejas B, Brydges R, Wang AT, Cook DA. Patient outcomes in simulation-based medical education: a systematic review. J Gen Intern Med. 2013. In press.

9. Cook DA. If you teach them, they will learn: why medical education needs comparative effectiveness research. Adv Health Sci Educ Theory Pract. 2012;17:305-310.

10. Zendejas B, Wang AT, Brydges R, Hamstra SJ, Cook DA. Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery. 2013;153:160-176.

11. Norman G, Dore K, Grierson L. The minimal relationship between simulation fidelity and transfer of learning. Med Educ. 2012;46:636-647.

12. Cook DA, Triola MM. Virtual patients: a critical literature review and proposed next steps. Med Educ. 2009;43:303-311.

13. Issenberg SB. The scope of simulation-based healthcare education. Simul Healthc. 2006;1:203-208.

14. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. A critical review of simulation-based medical education research: 2003–2009. Med Educ. 2010;44:50-63.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.