
Safety I, Safety II, and the New Views of Safety

Scanlon M, Jacobson N. Safety I, Safety II, and the New Views of Safety. PSNet [internet]. Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2025.


Matthew Scanlon, MD, MS and Nancy Jacobson, MD | February 26, 2025

Background and Context

Safety I and Safety II (Safety I/II) are not safety methods but rather perspectives on how to think about systems and safety. Specifically, Safety II posits that the systems of interest are complex, adaptive sociotechnical systems and that people are an integral component of the system, in contrast to the historically applied ‘people versus system’ paradigm. However, any discussion of Safety I/II should be placed in the context of safety science and patient safety. Without an appreciation of how the concepts of safety science and patient safety have evolved over time, Safety I/II is easy to misunderstand and misapply. Further, because Safety I/II can be viewed as one of several newer perspectives on safety, context is critical to understanding the implications for healthcare.1

Safety Science over Time

The domain of knowledge known as safety science can be viewed as “the interdisciplinary study of accidents and accident prevention”.2,3 The book Foundations of Safety Science describes over 100 years of the evolving science of safety, with its roots in the work of Frederick Taylor in the early 1900s and Herbert Heinrich in the 1930s. Importantly, Taylor’s Scientific Management (also known as Taylorism) and Heinrich’s behavior-based safety work manifest in much of the early patient safety work and in what is called Safety I.4,5 Dekker described periods of thinking in safety science (Table 1), but a full discussion of this history is beyond the scope of this Primer. Of note, remnants of prior periods persist, leading to varied and sometimes conflicting perspectives on safety science.

Table 1. Periods in the Evolution of Safety Science Thinking.

Time Period | Safety Science Model or Insights
1800s-1900 | Industrial Revolution
1900s and on | Factory Maintenance/Design
1910s and on | Taylor and Proceduralization
1920s and on | Accident Prone
1930s and on | Heinrich and Behavior-Based
1940s and on | Human Factors and Cognitive Systems
1950s, 1960s and on | Systems Safety
1970s and on | Man-Made Disasters
1980s and on | Normal Accidents and HROs
1990s and on | Swiss Cheese and Safety Management
2000s and on | Safety Culture
2010s and on | Resilience Engineering
Adapted from Dekker (2019). Foundations of Safety Science: A Century of Understanding Accidents and Disasters3

Like healthcare, where discoveries and insights have taken us from evil humors to gene therapies, safety science has changed over time. However, it is unclear whether the practice of patient safety has evolved in step with safety science.

Patient Safety over Time

Patient safety is a domain of safety focused on preventing harm to patients and, more recently, healthcare workers.6,7 Wears and colleagues describe three epochs of patient safety activity (the sporadic, cult, and breakout periods) and note two concerns: “a failure to resolve issues about the validity of the science...” underlying patient safety, and a marked decrease in “both the number and proportion of non-clinical safety scientists” in patient safety efforts.8

The decrease in non-clinical safety science experts in patient safety work is notable because some of the greatest successes in patient safety resulted from pioneering work in anesthesia beginning in the 1980s, with the creation of the Anesthesia Patient Safety Foundation in 1985. This work partnered engineers with clinicians to apply novel approaches to safety issues,8,9 reflecting the view, articulated in 1995, that “human error in medicine, and the adverse events that may follow, are problems of psychology and engineering, not medicine.” Leape had introduced similar ideas from engineering and the social sciences to the medical literature in 1994.10 Despite Leape’s introduction and the success of multidisciplinary efforts in anesthesia, more recent patient safety work has been executed by clinicians and administrators, excluding those trained in safety science. This choice has resulted in what Wears and Sutcliffe characterized as the stagnation of patient safety efforts,11 perhaps leading to the recently documented failure to reduce adverse events in patient care.12

The New Views and Looks at Safety

It is important to remember that Safety I/II are simply two of several perspectives that offer helpful ways to approach patient safety. In 2022, Le Coze summarized different perspectives on safety, often lumped under the umbrella of the “new view” or “new look”.1 In the 1970s, insights from cognitive psychology were applied to “error” and safety. Subsequently, James Reason applied cognitive psychology to error, and Rasmussen, Hollnagel, and Woods applied the framework of cognitive systems engineering. Their work collectively questioned how failure should be viewed, as well as the utility of the concept of human error.1,13-15 This questioning of long-held beliefs in safety science resulted in several different but largely complementary “new views” of safety, including resilience engineering and Woods’ Theory of Graceful Extensibility, Dekker’s Safety Differently, Conklin’s Human and Organizational Performance (HOP), and Hollnagel’s Safety I/II.4,16-18

Safety I

The Safety I paradigm for understanding how to think about systems and safety has its roots in the work of Taylor and Heinrich.4,5

Definition of Safety and Safety Management Principle

Safety I defines safety as having as few things go wrong as possible. As such, the starting point and perspective for evaluating safety is the detection of an undesired outcome.4,19 This ‘find and fix’ approach imbues any case-based examination with outcome and hindsight bias, and further enables the incorrect assumption that observations in this limited subset of cases demonstrate not just correlation but causation. Hollnagel terms this the causality credo, which he defines as “the belief that adverse outcomes happen because something goes wrong, hence... they have causes that can be found and treated.”4 Since the Safety I perspective defines safety by what it is not, it loses the opportunity to consider the majority of cases in which expected and desired outcomes occur. To take this a step further, since safety is measured by lack of detection, decreased detection is assumed to mean increased safety.4,5 As a result, little effort or infrastructure has been devoted to detecting or learning from cases wherein desired outcomes were achieved, and attempts to measure daily care resulting in desired outcomes pose logistical challenges. This framework is perpetuated by requirements imposed by hospitals, professional societies, and governing bodies for detecting and reacting to specific undesirable outcomes, accompanied by a concurrent lack of requirements for evaluating or facilitating desirable outcomes.

Accident Assessment and Understanding of Systems

In the Safety I approach, accidents and risks are caused by failures and malfunctions, and the purpose of investigation is to “find and fix” the causes.4,5,19 To understand this approach better, we must examine the Safety I perspective of systems and their functionality.

Safety I assumes that systems are decomposable and bimodal.4 Decomposable means that if the component parts comprising a system each function as desired, the system as a whole will produce the desired outcome. This perspective is widely applied to understanding systems of care and reviewing cases. While the malfunction of telemetry monitors may reasonably be approached as a decomposable system, applying the same reasoning to increasingly complex sociotechnical systems results in oversimplification and lost opportunities for improved understanding. Similarly, we see this logic applied to case reviews utilizing tools such as root cause analysis.20,21 Once an undesirable outcome is detected, reviewers dissect the case, isolating individual parts until a faulty part is found. This approach to case review also highlights the assumption of bimodality: Safety I assumes that systems function in two distinct modes, either correctly or incorrectly.4,5,19 As such, incorrectly functioning components (including people) must be addressed to transition to correct functionality.

View of the Human Factor

Safety I assumes that systems are well designed, well understood, reliable, and safe. When our systems do not work, the cause is predominantly viewed as human error or failure to follow procedures.4,5,19,20 As such, humans and their inherent variability are viewed as introducing liability or hazard into an otherwise trustworthy system. Consequently, ‘human error’ has emerged as a nearly ubiquitous contributing factor to undesirable outcomes as determined by investigative methods such as root cause analysis and fishbone diagrams.20,21 As a result, safety improvement efforts have focused on constraining human behaviors to fit a predictable and prescribed practice pattern, consistent with the perspective that human behavior is another component piece of a decomposable system.4,5,19

Safety II

Safety II has emerged as a perspective for understanding increasingly complex sociotechnical systems as a whole. Safety I may be insufficient to adequately address patient safety and, even more seriously, may impede it via unintended impacts on the poorly understood yet ubiquitous functions that help achieve desired outcomes most of the time.4,19,22

Definition of Safety and Safety Management Principle

The Safety II perspective defines safety as having as many things go right as possible. By adjusting the definition of safety to focus on desired outcomes, Safety II highlights the fact that “given the uncertainty, intractability, and complexity of healthcare work, the surprise is not that things occasionally go wrong, but that they go right so often.”4 Safety is defined by what happens when it is present, allowing learning from the vast majority of cases wherein desired outcomes are achieved. In this way, understanding is not limited to what detracts from safety, but also encompasses what facilitates safety and allows for desired outcomes. Further, this definition allows for a proactive perspective that continuously anticipates new developments and events. Rather than reacting retrospectively to what has gone wrong, Safety II attempts to understand the interaction of systems and behaviors in order to facilitate desired outcomes rather than only preventing undesired outcomes.4,5,19,22

Accident Assessment and Understanding of Systems

The Safety II approach to accident investigation recognizes that care delivery happens similarly regardless of outcome.4,19 This approach overcomes the hindsight and outcome biases, and the causality credo, mentioned above by recognizing that the same systems and behaviors present in cases with undesired outcomes are also present in cases with desired outcomes. In this way, an investigation seeks to understand how things usually go right as a basis for explaining how things occasionally go wrong.4,5,19

Safety II suggests that systems are increasingly intractable and their component parts increasingly interdependent. As such, systems cannot be decomposed into component parts in a meaningful way.4,19 Further, sociotechnical systems are not bimodal but dynamic, meaning that before any contributing factor to an undesirable outcome is identified and examined, it may have already changed. Accordingly, failures may be best understood as resulting from a coupling of concurrent factors and their interactions. These couplings and interactions are dynamic, ever-changing, and therefore difficult to examine or prescribe.4,5,19 Given the dynamic nature of the complex systems of healthcare delivery, no two scenarios are identical, even when considering well-studied and protocolized phenomena. This perspective supports interpreting components or functions in the context of outcomes. Observed outcomes may not be attributable to discrete contributing factors, but may instead remain inexplicable due to unknown or transient phenomena within the sociotechnical system.4,19 In this way, every individual part of a system may function correctly yet an undesired outcome may still result, and component pieces may malfunction yet a desired outcome may still be achieved.

View of the Human Factor

A Safety II approach suggests that, because of the dynamic interactions and interdependencies of numerous sociotechnical system factors, flexibility and adjustments are not only ubiquitous but necessary for system functionality and achieving desired outcomes.4,19 In a Safety II perspective, people are not the problem; they are the adaptive solution. Further, since adjustments and adaptations are necessary to achieve desired outcomes, attempts to improve safety by constraining performance variability may have the unintended consequence of limiting the ability to achieve desired outcomes.4,5,19 A Safety II approach would suggest understanding the functions and behaviors that result in desired outcomes most of the time and proactively facilitating those dynamic adaptations, rather than retrospectively attempting to mitigate undesired outcomes by constraining adaptive behaviors into prescribed patterns.4,5,19,22

Applications

The Safety I approach predominates across most healthcare organizations, but components of Safety II thinking are gaining traction, offering opportunities for application and early adoption.23

Work as Done and Work as Imagined

“Work-as-Done” and “Work-as-Imagined” are just two of multiple archetypes of work.24 To illustrate these concepts, consider how many steps are involved when a nurse administers a medication in the inpatient setting. Common responses are five to twelve steps.25,26 These assessments of the work of medication administration reflect what we imagine nurses do, based on experience and perceptions. In contrast, a federally funded study (AHRQ 1 R01 HS013610) that used direct observation of nursing work found that documenting the observable steps of medication administration produced a document over 40 pages long (BT Karsh, personal communication, November 2007). Importantly, the observed work did not capture the mental work of the bedside nurses. The observed 40-plus pages of medication administration work approach “Work-as-Done” and stand in stark contrast to the imagined five to twelve steps.

“Work-as-imagined" describes what is expected to happen under anticipated normal working conditions, not considering how work must be adapted to meet dynamic needs. “Work-as-done" describes what actually happens within complex, intractable, and dynamic systems.4,27 “Work-as-imagined" has its roots in Taylor’s Scientific Management Theory, which argued that “work-as-imagined" provides the essential and requisite model for “work-as-done.”4 That is, if “work-as-done" could be approximated to “work-as-imagined,” adverse events could be avoided, consistent with the Safety I approach to mitigating undesired outcomes by constraining behaviors to predictable and prescribed patterns. However, with increasing system complexity, “work-as-imagined" and “work-as-done" become increasingly disparate. A Safety II approach emphasizes the importance of understanding “work-as-done.” Because the adaptive behaviors that comprise “work-as-done" are necessary to achieving desired outcomes in an intractable and dynamic system, we must endeavor to understand “work-as-done" rather than approximating it to “work-as-imagined.”4,5,19,28

An important first step for anyone attempting to incorporate Safety II perspectives into patient safety operations at their organization is to better understand the “work-as-done” that results in expected and acceptable outcomes. These events, while anticipated and frequent, offer an opportunity to illustrate existing gaps between “work-as-imagined” and “work-as-done.”4,5,19

Safety II as an Investment

Maximizing the things that go right is an investment in safety and productivity. Through increased sensitivity to gaps between “work-as-imagined” and “work-as-done” during frequent, small-scale events in which desired outcomes are achieved, one may increase learning and subsequent impact (as opposed to the traditional approach of conserving resources by focusing on rare but high-harm events).4,29 Upfront investment in understanding “work-as-done” can result in downstream effectiveness in learning about and supporting systems that function effectively. The common misunderstanding that Safety II approaches are more expensive and require more resources than Safety I stems from the misconception that Safety II requires everything that goes well to be analyzed. Instead, Safety II simply suggests selecting events for review based on frequency rather than severity. In this way, similar effort and resource utilization to understand a system and its functions may be applied to proportionally more cases and result in proportionally greater impact.29

Misconceptions and Challenges

Safety I and Safety II: Both/And vs. Either/Or

Hollnagel does not advocate for replacing Safety I with Safety II, but instead for combining these two ways of thinking.4,29 Many events may remain sufficiently addressed by Safety I approaches. Nonetheless, a growing number of cases require a new way of thinking about safety. Sometimes, methods historically used within a Safety I paradigm may be retained but viewed through a Safety II lens. A Safety II approach may also require new methods and tools to understand how things work, depending on the sociotechnical system and circumstances. A major misconception is that Safety I and Safety II are mutually exclusive,29 but this is simply not the case.4,5,29

Safety II is Not About Great Saves

Safety II does not focus on undesired outcomes, nor does it focus on exceptional outcomes. Similar to undesired outcomes, great saves are easy to see, complicated, and difficult to change. Rather than focusing on either end of this spectrum, a Safety II approach focuses on the middle, emphasizing better understanding of the daily sociotechnical systems, functions, and interactions that achieve desired outcomes most of the time.4,5 This approach is inherently challenging because expected daily events are difficult to detect and measure, and are largely ignored. Nonetheless, because these events are frequent, if we can understand how and why they happen, we can learn about performance adjustments and detect existing gaps between “work-as-imagined” and “work-as-done.”

Safety I and Safety II Are Not Methods

Safety I and Safety II are perspectives on how to think about systems.4 Similar to the analogy of two people standing on opposite sides of a tree and describing it, Safety I and Safety II examine the same sociotechnical system using different outlooks. Therefore, Safety II cannot be employed as a methodology to mitigate error or reduce harm; it is a way of thinking, a paradigm through which to understand the workings of complex systems, rather than an error or harm mitigation tool.

Additionally, it should be recognized that Hollnagel, the main proponent of Safety I/II, is one of many safety scientists who have discounted the value of error as a target of safety efforts.30-32 Because human error is a social construct and a post hoc determination, assigning it value as a cause limits both learning and opportunities for improvement.

Matthew C Scanlon, MD, MS
Professor of Pediatrics, Critical Care Medicine
Medical College of Wisconsin
Mscanlon@mcw.edu

Nancy Jacobson, MD
Assistant Professor of Emergency Medicine
Medical College of Wisconsin
Njacobson@mcw.edu

 

References

  1. Le Coze JC. The “new view” of human error. Origins, ambiguities, successes and critiques. Saf Sci. 2022;154:105853. [Available at]
  2. Senders JW. Medical devices, medical errors, and medical accidents. In: Human Error in Medicine. CRC Press; 2018:159-177.
  3. Dekker S. Foundations of Safety Science: A Century of Understanding Accidents and Disasters. CRC Press; 2019.
  4. Hollnagel E, Wears RL, Braithwaite J. From Safety-I to Safety-II: A White Paper. Middelfart, Denmark: Resilient Health Care Net; 2015. [Free full text]
  5. Hollnagel E. Safety-I and Safety-II: The Past and Future of Safety Management. Ashgate; 2014.
  6. Patient safety 101. Primer. Patient Safety Network website. Agency for Healthcare Research and Quality. September 7, 2019. Accessed January 23, 2025. [Free full text]
  7. Foster C, Doud L, Palangyo T, et al. Healthcare worker serious safety events: applying concepts from patient safety to improve healthcare worker safety. Pediatr Qual Saf. 2021;6(4):e434. [Free full text]
  8. Zipperer L, ed. Patient Safety: Perspectives on Evidence, Information and Knowledge Transfer. Gower Publishing; 2014.
  9. Eichhorn JH. The APSF at 25: pioneering success in safety, but challenges remain. Anesthesia Patient Safety Foundation Newsletter. 2010;25(2):21-44. Accessed January 23, 2025. [Free full text]
  10. Leape LL. Error in medicine. JAMA. 1994;272(23):1851-1857. [Free full text]
  11. Wears R, Sutcliffe K. Reflection. In: Still Not Safe: Patient Safety and the Middle-Managing of American Medicine. Oxford University Press; 2019:193-210. [Free full text]
  12. Bates DW, Levine DM, Salmasian H, et al. The safety of inpatient health care. N Engl J Med. 2023;388(2):142-153. [Free full text]
  13. Reason J. The nature of error. In: Human Error. Cambridge University Press; 1990:1-18.
  14. Le Coze JC. Reflecting on Jens Rasmussen’s legacy. A strong program for a hard problem. Saf Sci. 2015;71(Pt B):123-141. [Free full text]
  15. Hollnagel E, Woods DD. Cognitive systems engineering: new wine in new bottles. Int J Hum Comput Stud. 1999;51(2):339-356. [Available at]
  16. Woods DD. The theory of graceful extensibility: basic rules that govern adaptive systems. Environ Syst Decis. 2018;38(4):433-457. [Free full text]
  17. Dekker S. Safety Differently: Human Factors for a New Era. 2nd ed. CRC Press; 2014.
  18. Zavaglia S. The 5 principles of human performance: a contemporary update of the building blocks of human performance for the new view of safety. Prof Saf. 2023;68(7):35.
  19. Hollnagel E. Safety-II in Practice: Developing the Resilience Potentials. Routledge; 2017.
  20. Root cause analysis. Primer. Patient Safety Network website. Agency for Healthcare Research and Quality. September 7, 2019. Accessed January 23, 2025. [Free full text]
  21. Shaikh U. Strategies and approaches for investigating patient safety events. Primer. Patient Safety Network website. Agency for Healthcare Research and Quality. March 30, 2022. Accessed January 23, 2025. [Free full text]
  22. Hollnagel E. The pragmatic and the academic view on expert systems. Expert Syst Appl. 1991;3:179-185.
  23. Venkatesan C, Helak K, Sousane Z, et al. Application of safety-II principles. Perspective. Patient Safety Network website. Agency for Healthcare Research and Quality. August 28, 2024. Accessed January 23, 2025. [Free full text]
  24. Shorrock S. Humanistic Systems website. December 5, 2023. Accessed January 23, 2025. [Free full text]
  25. Doyle GR, McCutcheon JA. Non-parenteral medication administration. In: Clinical Procedures for Safer Patient Care. Pressbooks; 2015:339-423. Accessed January 23, 2025. [Free full text]
  26. Step-by-step guide to administering medications. Unitek College. October 17, 2022. Accessed January 23, 2025. [Free full text]
  27. Hollnagel E. Prologue: why do our expectations of how work should be done never correspond exactly to how work is done? In: Braithwaite J, Wears RL, Hollnagel E, eds. Resilient Health Care, Volume 3: Reconciling Work-as-Imagined and Work-as-Done. CRC Press; 2017:xvii-xxv.
  28. Hollnagel E, Braithwaite J, Wears RL, eds. Delivering Resilient Health Care. Routledge; 2018.
  29. Safeguard. May-June 2019. Accessed September 30, 2024. [Free full text]
  30. Hollnagel E, Amalberti R. The emperor’s new clothes: or whatever happened to “human error”? In: Dekker SWA, ed. Proceedings of the 4th International Workshop on Human Error, Safety and Systems Development. Linköping University; 2001.
  31. Conklin T. Pre-Accident Investigations: Better Questions - An Applied Approach to Operational Learning. CRC Press; 2016.
  32. Read GJ, Shorrock S, Walker GH, et al. State of science: evolving perspectives on ‘human error’. Ergonomics. 2021;64(9):1091-1114. [Free full text]

 

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.