Glossary
Definitions abound in the medical error and patient safety literature, with subtle and not-so-subtle variations in the meanings of important terms. This glossary aims to provide the most straightforward terminology, with definitions that encourage their distinct application in patient safety.
Accountability in healthcare represents the procedures and processes by which one party justifies and takes responsibility for its activities. This can occur at both the individual and the organizational level. Individuals must be held accountable for their actions, but organizations also play a role and must be accountable for their structures and systems.
Organizational accountability is dependent upon a safety culture that accepts that adverse events and medical errors should lead to organizational learning (as opposed to a punitive culture, in which an organization places blame on individuals rather than making systematic changes based on learning from the errors). When individual errors occur, it is important that organizations distinguish behaviors that are reckless (conscious disregard of risk) or intentional from behaviors that reflect human error (e.g., the person should have done something different) or negligence (failure to exercise expected care). Human error and negligent errors are typically the result of lack of knowledge, misremembering, or misplaced priorities.
The terms active and latent as applied to errors were coined by Reason. Active errors occur at the point of contact between a human and some aspect of a larger system (e.g., a human-machine interface). They are generally readily apparent (e.g., pushing an incorrect button, ignoring a warning light) and almost always involve someone at the frontline. Active failures are sometimes referred to as errors at the sharp end, figuratively referring to a scalpel. In other words, errors at the sharp end are noticed first because they are committed by the person closest to the patient. This person may literally be holding a scalpel (e.g., an orthopedist operating on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. Latent errors (or latent conditions), in contrast, refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. To complete the metaphor, latent errors are those at the other end of the scalpel, the blunt end referring to the many layers of the health care system that affect the person "holding" the scalpel.
See Primer. An adverse event (i.e., injury resulting from medical care) involving medication use.
Examples:
- anaphylaxis to penicillin
- major hemorrhage from heparin
- aminoglycoside-induced renal failure
- agranulocytosis from chloramphenicol
As with the more general term adverse event, the occurrence of an ADE does not necessarily indicate an error or poor quality of care. ADEs that involve an element of error (either of omission or commission) are often referred to as preventable ADEs. Medication errors that reached the patient but by good fortune did not cause any harm are often called potential ADEs. For instance, a serious allergic reaction to penicillin in a patient with no prior such history is an ADE, but so is the same reaction in a patient who has a known allergy history but receives penicillin due to a prescribing oversight. The former occurrence would count as an adverse drug reaction or non-preventable ADE, while the latter would represent a preventable ADE. If a patient with a documented serious penicillin allergy received a penicillin-like antibiotic but happened not to react to it, this event would be characterized as a potential ADE.
An ameliorable ADE is one in which the patient experienced harm from a medication that, while not completely preventable, could have been mitigated. For instance, a patient taking a cholesterol-lowering agent (statin) may develop muscle pains and eventually progress to a more serious condition called rhabdomyolysis. Failure to periodically check a blood test that assesses muscle damage or failure to recognize this possible diagnosis in a patient taking statins who subsequently develops rhabdomyolysis would make this event an ameliorable ADE: harm from medical care that could have been lessened with earlier, appropriate management. Again, the initial development of some problem was not preventable, but the eventual harm that occurred need not have been so severe, hence the term ameliorable ADE.
Adverse effect produced by the use of a medication in the recommended manner (i.e., a drug side effect). These effects range from nuisance effects (e.g., dry mouth with anticholinergic medications) to severe reactions, such as anaphylaxis to penicillin. Adverse drug reactions represent a subset of the broad category of adverse drug events; specifically, they are non-preventable ADEs.
See Primer. Any injury caused by medical care.
Examples:
- pneumothorax from central venous catheter placement
- anaphylaxis to penicillin
- postoperative wound infection
- hospital-acquired delirium (or "sundowning") in elderly patients
Identifying something as an adverse event does not imply "error," "negligence," or poor quality care. It simply indicates that an undesirable clinical outcome resulted from some aspect of diagnosis or therapy, not an underlying disease process. Thus, pneumothorax from central venous catheter placement counts as an adverse event regardless of insertion technique. Similarly, postoperative wound infections count as adverse events even if the operation proceeded with optimal adherence to sterile procedures, the patient received appropriate antibiotic prophylaxis in the perioperative setting, and so on. (See also iatrogenic).
See Primer. Being discharged from the hospital can be dangerous for patients. Nearly 20% of patients experience an adverse event in the first 3 weeks after discharge, including medication errors, health care associated infections, and procedural complications.
See Primer. Computerized warnings and alarms are used to improve safety by alerting clinicians to potentially unsafe situations. However, this proliferation of alerts may have negative implications for patient safety as well.
Beers criteria define medications that generally should be avoided in ambulatory elderly patients, doses or frequencies of administration that should not be exceeded, and medications that should be avoided in older persons known to have any of several common conditions. The criteria were originally developed using a formal consensus process for combining reviews of the evidence with expert input. The criteria for inappropriate use address commonly used categories of medications such as sedative-hypnotics, antidepressants, antipsychotics, antihypertensives, nonsteroidal anti-inflammatory agents, oral hypoglycemics, analgesics, dementia treatments, platelet inhibitors, histamine-2 blockers, antibiotics, decongestants, iron supplements, muscle relaxants, gastrointestinal antispasmodics, and antiemetics. The criteria were intended to guide clinical practice, but also to inform quality assurance review and health services research.
Most would agree that prescriptions for medications deemed inappropriate according to Beers criteria represent poor quality care. Unfortunately, harm does not only occur from receipt of these inappropriately prescribed medications. In one comprehensive national study of medication-related emergency department visits for elderly patients, most problems involved common and important medications not considered inappropriate according to the Beers criteria: principally, oral anticoagulants (e.g., warfarin), antidiabetic agents (e.g., insulin), and antiplatelet agents (aspirin and clopidogrel).
Best practices in health care are considered the ‘best way’ to identify, collect, evaluate, and disseminate information; implement practices; and/or monitor the outcomes of health care interventions for patients or population groups with defined indications or conditions. The term “best practices” is somewhat controversial, as some “best practices” may not be supported by rigorous evidence. There has therefore been a transition to using “evidence-based practice” or the “best available evidence” to demonstrate that the practice is grounded in empirical research. Examples of evidence-based best practices include surgical pre-op checklists, sepsis bundles, and reducing the use of indwelling catheters.
The blunt end refers to the many layers of the health care system not in direct contact with patients, but which influence the personnel and equipment at the sharp end who do contact patients. The blunt end thus consists of those who set policy, manage health care institutions, and design medical devices, and other people and forces, which, though removed in time and space from direct patient care, nonetheless affect how care is delivered. Thus, an error programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple different types of infusion pumps, making programming errors more likely, would represent a problem at the blunt end. The terminology of "sharp" and "blunt" ends corresponds roughly to active failures and latent conditions.
A bundle is a set of evidence-based interventions that, when performed consistently and reliably, has been shown to improve outcomes and safety in health care. A bundle typically comprises a small number of clinical practices (usually 3-5), each supported by scientifically robust clinical evidence, that are performed cohesively for maximal impact. Examples include bundles to improve maternal care and the timely identification and treatment of sepsis.
See Primer. Burnout is a syndrome of emotional exhaustion, depersonalization, and decreased sense of accomplishment at work that results in overwhelming symptoms of fatigue, exhaustion, cynical detachment, and feelings of ineffectiveness. Burnout among health care professionals is widely understood as an organizational problem in health care that needs to be addressed and has been associated with increased patient safety incidents, including medical errors, reduced patient satisfaction, and poorer safety and quality ratings.
See Primer. Though a seemingly simple intervention, checklists have played a leading role in the most significant successes of the patient safety movement, including the near-elimination of central line associated bloodstream infections in many intensive care units.
See Primer. Any system designed to improve clinical decision-making related to diagnostic or therapeutic processes of care. Typically a decision support system responds to "triggers" or "flags" (specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters) and provides information or recommendations directly relevant to a specific patient encounter.
CDSSs address activities ranging from the selection of drugs (e.g., the optimal antibiotic choice given specific microbiologic data) or diagnostic tests to detailed support for optimal drug dosing and support for resolving diagnostic dilemmas. Structured antibiotic order forms represent a common example of paper-based CDSSs. Although such systems are still commonly encountered, many people equate CDSSs with computerized systems in which software algorithms generate patient-specific recommendations by matching characteristics, such as age, renal function, or allergy history, with rules in a computerized knowledge base.
The distinction between decision support and simple reminders can be unclear, but usually reminder systems are included as decision support if they involve patient-specific information. For instance, a generic reminder (e.g., "Did you obtain an allergy history?") would not be considered decision support, but a warning (e.g., "This patient is allergic to codeine.") that appears at the time of entering an order for codeine would be. A recent systematic review estimated the pooled effects for simple computer reminders and more complex decision support provided at the point of care (i.e., as clinicians entered orders in computerized provider order entry systems or performed clinical documentation in electronic medical records).
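To make the rule-matching idea concrete, the sketch below shows in Python how patient-specific decision support might be wired up. It is a minimal illustration only; the field names, rules, thresholds, and alert messages are invented for this example and are not drawn from any actual CDSS product.

```python
# Minimal sketch of patient-specific decision support: each rule inspects
# patient data plus the current order and, if triggered, returns an alert.
# All field names, thresholds, and messages are hypothetical.

def allergy_rule(patient, order):
    if order["drug"] in patient["allergies"]:
        return f"This patient is allergic to {order['drug']}."
    return None

def renal_dosing_rule(patient, order):
    # e.g., suggest dose adjustment for an aminoglycoside when creatinine is elevated
    if order["drug"] == "gentamicin" and patient["creatinine_mg_dl"] > 1.5:
        return "Elevated creatinine: consider dose adjustment for decreased renal function."
    return None

RULES = [allergy_rule, renal_dosing_rule]

def check_order(patient, order):
    """Return all alerts triggered by this order for this patient."""
    return [msg for rule in RULES if (msg := rule(patient, order))]

patient = {"allergies": {"codeine"}, "creatinine_mg_dl": 2.1}
print(check_order(patient, {"drug": "codeine"}))
# ['This patient is allergic to codeine.']
```

Note that both alerts here are patient-specific (they depend on this patient's allergy list and laboratory values), which is what distinguishes decision support from a generic reminder in the sense described above.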
An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). Such events have also been termed near miss incidents.
Closed loop communication consists of exchanging clear, concise information and acknowledging receipt of that information to confirm it was understood. The communication is addressed to a specific person on the clinical team by name, and the recipient repeats the message back to the sender. Such communication enhances patient safety by preventing confusion, ensuring that the team operates under a shared mental model, and making clear that a specific person is responsible for completing the task.
Cognitive biases are ways in which a particular person understands events, facts, and other people based on their own set of beliefs and experiences, which may or may not be reasonable or accurate. People are often unaware of the influence of their cognitive biases. Examples of common cognitive biases include:
- Confirmation bias (e.g., neglecting evidence that goes against your belief);
- Anchoring bias (prioritizing information/data that supports one’s initial impressions);
- Framing bias (the manner by which data are presented);
- Authority bias (when a higher authority provides information);
- Affect heuristic (when actions are swayed by emotion rather than rational decision-making).
Cognitive bias impacts patient safety in a variety of ways. For example, cognitive biases can lead to diagnostic errors because they disrupt the processes by which physicians and advanced practice providers gather and interpret evidence and take appropriate actions. Authority bias is common in healthcare; for example, nurses tend to accept the opinions of physicians at face value.
Related terms: Confirmation bias, availability bias, rule of thumb
Communication (disclosure) and resolution programs (CRPs) emphasize early admission of adverse events and proactive approaches to resolving patient safety issues. CRPs offer patients empathetic treatment and care after adverse events, even when no harm occurs. These programs focus on transparency; recognizing accountability; acting in a fair, just manner; using and sustaining practices that enhance patient safety; and making disclosure communications truly transparent. The CANDOR toolkit, developed by AHRQ, provides organizations with tools necessary to implement a CRP. Whereas the historical approach in response to unexpected harm often followed a "deny-and-defend" strategy (e.g., providing limited information to patients and families, avoiding admission of fault), the CANDOR toolkit uses a person-centered approach and promotes greater transparency and early sharing of errors with patients and families.
Related term: Transparency
Compassion fatigue refers to the physical and mental exhaustion and emotional withdrawal experienced by individuals who care for sick or traumatized people over an extended period. Compassion fatigue can decrease effective teamwork behaviors, increase secondary stress, burnout, depression, or anxiety, and escalate the use of negative coping behaviors, all of which may have a negative impact on patient safety, as these healthcare workers may commit more errors.
Related term: Burnout
Complexity theory differs importantly from systems thinking in its emphasis on the interaction between local systems and their environment (such as the larger system in which a given hospital or clinic operates). It is often tempting to ignore the larger environment as unchangeable and therefore outside the scope of quality improvement or patient safety activities. According to complexity theory, however, behavior within a hospital or clinic (e.g., non-compliance with a national practice guideline) can often be understood only by identifying interactions between local attributes and environmental factors.
See Primer. Computerized provider order entry systems ensure standardized, legible, and complete orders and, especially when paired with decision support systems, have the potential to sharply reduce medication prescribing errors.
The tendency to focus on evidence that supports a working hypothesis, such as a diagnosis in clinical medicine, rather than to look for evidence that refutes it or provides greater support to an alternative diagnosis. Suppose that a 65-year-old man with a past history of angina presents to the emergency department with acute onset of shortness of breath. The physician immediately considers the possibility of cardiac ischemia, so asks the patient if he has experienced any chest pain. The patient replies affirmatively. Because the physician perceives this answer as confirming his working diagnosis, he does not ask if the chest pain was pleuritic in nature, which would decrease the likelihood of an acute coronary syndrome and increase the likelihood of pulmonary embolism (a reasonable alternative diagnosis for acute shortness of breath accompanied by chest pain). The physician then orders an EKG and cardiac troponin. The EKG shows nonspecific ST changes and the troponin returns slightly elevated.
Of course, ordering an EKG and testing cardiac enzymes is appropriate in the work-up of acute shortness of breath, especially when it is accompanied by chest pain and in a patient with known angina. The problem is that these tests may be misleading, since positive results are consistent not only with acute coronary syndrome but also with pulmonary embolism. To avoid confirmation bias in this case, the physician might have obtained an arterial blood gas or a D-dimer level. Abnormal results for either of these tests would be relatively unlikely to occur in a patient with an acute coronary syndrome (unless complicated by pulmonary edema), but likely to occur with pulmonary embolism. These results could be followed up by more direct testing for pulmonary embolism (e.g., with a helical CT scan of the chest), whereas normal results would allow the clinician to proceed with greater confidence down the road of investigating and managing cardiac ischemia.
This vignette was presented as if information were sought in sequence. In many cases, especially in acute care medicine, clinicians have the results of numerous tests in hand when they first meet a patient. The results of these tests often do not all suggest the same diagnosis. The appeal of accentuating confirmatory test results and ignoring nonconfirmatory ones is that it minimizes cognitive dissonance.
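One way to see why disconfirmatory tests carry so much weight in this vignette is a back-of-the-envelope likelihood-ratio calculation. The sketch below uses probabilities invented purely for illustration; they are not clinical estimates.

```python
# Stylized illustration of why seeking disconfirming evidence matters.
# All probabilities below are invented for illustration only.

prior_odds_acs = 1.0  # suppose ACS and PE start out equally likely (odds 1:1)

# A slightly elevated troponin is assumed here to be fairly likely under
# either diagnosis, so it barely shifts the odds (a weak confirmatory test):
lr_troponin = 0.9 / 0.6   # P(result | ACS) / P(result | PE) = 1.5

# A markedly abnormal D-dimer is assumed to be unlikely with uncomplicated
# ACS but common with PE, so it shifts the odds strongly:
lr_ddimer = 0.1 / 0.9     # about 0.11

print(prior_odds_acs * lr_troponin)              # ~1.5: little discrimination
print(prior_odds_acs * lr_troponin * lr_ddimer)  # ~0.17: odds now favor PE
```

Under these assumed numbers, the confirmatory test barely changes the odds between the two diagnoses, while the potentially disconfirming test moves them several-fold, which is the quantitative intuition behind seeking evidence that could refute the working hypothesis.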
A related cognitive trap that may accompany confirmation bias and compound the possibility of error is "anchoring bias": the tendency to stick with one's first impressions, even in the face of significant disconfirming evidence.
Crisis management is the process by which a team or organization deals with a major event that threatens to harm the organization, its stakeholders, or the general public. Examples of events that may require crisis management include significant adverse events (death of a patient due to a medical error) or a significant environmental event such as a fire. The COVID-19 pandemic is also an example: a public health emergency requiring crisis management early in the event.
A term made famous by a classic human factors study by Cooper of "anesthetic mishaps," though the term had first been coined in the 1950s. Cooper and colleagues brought the technique of critical incident analysis to a wide audience in health care but followed the definition of the originator of the technique. They defined critical incidents as occurrences that are "significant or pivotal, in either a desirable or an undesirable way," though Cooper and colleagues (and most others since) chose to focus on incidents that had potentially undesirable consequences. This concept is best understood in the context of the type of investigation that follows, which is very much in the style of root cause analysis. Thus, significant or pivotal means that there was significant potential for harm (or actual harm), but also that the event has the potential to reveal important hazards in the organization. In many ways, it is the spirit of the expression in quality improvement circles, "every defect is a treasure." In other words, these incidents, whether near misses or disasters in which significant harm occurred, provide valuable opportunities to learn about individual and organizational factors that can be remedied to prevent similar incidents in the future.
Cultural competence includes individual attitudes and behaviors and refers to one’s capacity to appreciate, respect, and interact with members of a different social or cultural group. In healthcare, it includes the ability to provide culturally sensitive care to individuals. To provide person-centered, high quality, and safe care, health care professionals must be prepared to tailor care to prevent adverse events or harm to individual patients from different groups (e.g., race, ethnicity, gender, language, religion, social status). Research has shown that health literacy, English proficiency, lack of trust, and other cultural issues can lead to adverse events, particularly medication errors. Other terms that have been associated with cultural competence include cultural intelligence (knowledge about various cultures and their social context) and cultural humility, both of which assume an approach to care where the provider is sensitive to the cultural context of patients and avoids making assumptions about the patient’s beliefs and environment.
See Primer. Debriefing is a brief, planned, and non-threatening conversation that is conducted to review a procedure or event. The goal is to get individuals involved together right after the procedure or event to discuss what went well and to identify areas for improvement. A debrief can help obtain new information after patient safety events such as near misses, adverse events, or medical errors.
Typically a decision support system responds to "triggers" or "flags"—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter. For instance, ordering an aminoglycoside for a patient with creatinine above a certain value might trigger a message suggesting a dose adjustment based on the patient’s decreased renal function.
See Primer. Deprescribing is the process of supervised medication discontinuation or dose reduction to reduce potentially inappropriate medication (PIM) use. Deprescribing is one intervention that can be applied to reduce the risk for adverse drug events (ADEs) or medication errors associated with polypharmacy.
See Primer. Thousands of patients die every year due to diagnostic errors. While clinicians' cognitive biases play a role in many diagnostic errors, underlying health care system problems also contribute to missed and delayed diagnoses.
See Primer. Many victims of medical errors never learn of the mistake, because the error is simply not disclosed. Physicians have traditionally shied away from discussing errors with patients, due to fear of precipitating a malpractice lawsuit and embarrassment and discomfort with the disclosure process.
See Primer. Popular media often depicts physicians as brilliant, intimidating, and condescending in equal measures. This stereotype, though undoubtedly dramatic and even amusing, obscures the fact that disruptive and unprofessional behavior by clinicians poses a definite threat to patient safety.
See Primer. Long and unpredictable work hours have been a staple of medical training for centuries. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting duty hours for all residents to reduce fatigue. The implementation of resident duty-hour restrictions has been controversial, as evidence regarding its impact on patient safety has been mixed.
See Primer. Patient and caregiver engagement is centered on providers, patients, and caregivers working together to improve health. A patient’s greater engagement in healthcare contributes to improved health outcomes. Patients want to be engaged in their healthcare decision-making process, and those who are engaged as decision-makers in their own care tend to be healthier and experience better outcomes. Efforts to engage patients and caregivers in safety efforts have focused on three areas: enlisting patients and caregivers in detecting adverse events, empowering patients and caregivers to ensure safe care, and emphasizing patient and caregiver involvement as a means of improving the culture of safety.
An act of commission (doing something wrong) or omission (failing to do the right thing) that leads to an undesirable outcome or significant potential for such an outcome. For instance, ordering a medication for a patient with a documented allergy to that medication would be an act of commission. Failing to prescribe a proven medication with major benefits for an eligible patient (e.g., low-dose unfractionated heparin as venous thromboembolism prophylaxis for a patient after hip replacement surgery) would represent an error of omission.
Errors of omission are more difficult to recognize than errors of commission but likely represent a larger problem. In other words, there are likely many more instances in which the provision of additional diagnostic, therapeutic, or preventive modalities would have improved care than there are instances in which the care provided quite literally should not have been provided. In many ways, this point echoes the generally agreed-upon view in the health care quality literature that underuse far exceeds overuse, even though the latter historically received greater attention. (See definition for Underuse, Overuse, Misuse.) In addition to commission vs. omission, three other dichotomies commonly appear in the literature on errors: active failures vs. latent conditions, errors at the sharp end vs. errors at the blunt end, and slips vs. mistakes.
Error chain generally refers to the series of events that led to a disastrous outcome, typically uncovered by a root cause analysis. Sometimes the chain metaphor carries the added sense of inexorability, as many of the causes are tightly coupled, such that one problem begets the next. A more specific meaning of error chain, especially when used in the phrase "break the error chain," relates to the common themes or categories of causes that emerge from root cause analyses. These categories go by different names in different settings, but they generally include (1) failure to follow standard operating procedures, (2) poor leadership, (3) breakdowns in communication or teamwork, (4) overlooking or ignoring individual fallibility, and (5) losing track of objectives. Used in this way, "break the error chain" is shorthand for an approach in which team members continually address these links as a crisis or routine situation unfolds. The checklists that are included in teamwork training programs have categories corresponding to these common links in the error chain (e.g., establish a team leader, assign roles and responsibilities, and monitor your teammates).
The concept of evidence-based treatments has particular relevance to patient safety, because many recommended methods for measuring and improving safety problems have been drawn from other high-risk industries, without any studies to confirm that these strategies work well in health care (or, in many cases, that they work well in the original industry). The lack of evidence supporting widely recommended (sometimes even mandated) patient safety practices contrasts sharply with the rest of clinical medicine. While individual practitioners may employ diagnostic tests or administer treatments of unproven value, professional organizations typically do not endorse such aspects of care until well-designed studies demonstrate that these diagnostic or treatment strategies confer net benefit to patients (i.e., until they become evidence-based). Certainly, diagnostic and therapeutic processes do not become standard of care or in any way mandated until they have undergone rigorous evaluation in well-designed studies.
In patient safety, by contrast, patient safety goals established at state and national levels (sometimes even mandated by regulatory agencies or by law) often reflect ideas that have undergone little or no empiric evaluation. Just as in clinical medicine, promising safety strategies sometimes can turn out to confer no benefit or even create new problems—hence the need for rigorous evaluations of candidate patient safety strategies just as in other areas of medicine. That said, just how high to set the bar for the evidence required to justify actively disseminating patient safety and quality improvement strategies is a subject that has received considerable attention in recent years. Some leading thinkers in patient safety argue that an evidence bar comparable to that used in more traditional clinical medicine would be too high, given the difficulty of studying complex social systems such as hospitals and clinics, and the high costs of studying interventions such as rapid response teams or computerized order entry.
Error analysis may involve retrospective investigations (as in Root Cause Analysis) or prospective attempts to predict "error modes." Different frameworks exist for predicting possible errors. One commonly used approach is failure mode and effect analysis (FMEA), in which the likelihood of a particular process failure is combined with an estimate of the relative impact of that error to produce a "criticality index." By combining the probability of failure with the consequences of failure, this index allows for the prioritization of specific processes as quality improvement targets. For instance, an FMEA analysis of the medication dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest "criticality indices") would be prioritized for error proofing.
A common process used to prospectively identify error risk within a particular process. FMEA begins with a complete process mapping that identifies all the steps that must occur for a given process to occur (e.g., programming an infusion pump or preparing an intravenous medication in the pharmacy). With the process mapped out, the FMEA then continues by identifying the ways in which each step can go wrong (i.e., the failure modes for each step), the probability that each error will be detected (i.e., so that it can be corrected before causing harm), and the consequences or impact of the error not being detected. The estimates of the likelihood of a particular process failure, the chance of detecting such failure, and its impact are combined numerically to produce a criticality index.
This criticality index provides a rough quantitative estimate of the magnitude of hazard posed by each step in a high-risk process. Assigning a criticality index to each step allows prioritization of targets for improvement. For instance, an FMEA analysis of the medication-dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest criticality indices) would be prioritized for error proofing.
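As a concrete illustration of the ranking just described, the following sketch computes criticality indices for a few steps of a medication-dispensing process. The steps and the 1-10 scores for probability of failure, chance the failure escapes detection, and impact are invented for this example.

```python
# Minimal FMEA sketch: rank process steps by criticality index.
# Steps and 1-10 scores below are invented for illustration.

steps = [
    # (step, P(failure), P(escapes detection), impact)
    ("Transcribe order in pharmacy",      4, 3, 7),
    ("Fill automated dispensing machine", 2, 6, 8),
    ("Program infusion pump",             3, 7, 9),
]

def criticality(step):
    name, p_fail, p_undetected, impact = step
    return p_fail * p_undetected * impact  # higher product = bigger hazard

# Steps with the highest criticality indices are prioritized for error proofing.
for step in sorted(steps, key=criticality, reverse=True):
    print(f"{step[0]}: criticality index = {criticality(step)}")
```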
FMEA makes sense as a general approach and it (or similar prospective error-proofing techniques) has been used in other high-risk industries. However, the reliability of the technique is not clear. Different teams charged with analyzing the same process may identify different steps in the process, assign different risks to the steps, and consequently prioritize different targets for improvement.
See Primer. Failure to rescue is shorthand for failure to prevent a clinically important deterioration (e.g., death or permanent disability) arising from a complication of an underlying illness (e.g., cardiac arrest in a patient with acute myocardial infarction) or a complication of medical care (e.g., major hemorrhage after thrombolysis for acute myocardial infarction). Failure to rescue thus provides a measure of the degree to which providers responded to adverse occurrences (e.g., hospital-acquired infections, cardiac arrest or shock) that developed on their watch. It may reflect the quality of monitoring, the effectiveness of actions taken once early complications are recognized, or both.
The technical motivation for using failure to rescue to evaluate the quality of care stems from the concern that some institutions might document adverse occurrences more assiduously than others; rewarding lower rates of in-hospital complications by themselves may therefore simply reward hospitals with poor documentation. However, if the medical record indicates that a complication has occurred, the response to that complication should provide an indicator of the quality of care that is less susceptible to charting bias.
See Primer. The process when one health care professional updates another on the status of one or more patients for the purpose of taking over their care. Typical examples involve a physician who has been on call overnight telling an incoming physician about patients she has admitted so he can continue with their ongoing management, know what immediate issues to watch out for, and so on. Nurses similarly conduct a handover at the end of their shift, updating their colleagues about the status of the patients under their care and tasks that need to be performed. When the outgoing nurses return for their next duty period, they will in turn receive new updates during the change of shift handover.
Handovers in care have always carried risks: a professional who spent hours assessing and managing a patient, upon completion of her work, provides a brief summary of the salient features of the case to an incoming professional who typically has other unfamiliar patients he must get to know. The summary may leave out key details due to oversight, exacerbated by an unstructured process and being rushed to finish work. Even structured, fairly thorough summaries during handovers may fail to capture nuances that could subsequently prove relevant.
In addition to handoffs between professionals working in the same clinical unit, shorter lengths of stay in hospitals and other occupancy issues have increased transitions between settings, with patients more often moving from one ward to another or from one institution to another (e.g., from an acute care hospital to a rehabilitation facility or skilled nursing facility). Due to the increasing recognition of hazards associated with these transitions in care, the term "handovers" is often used to refer to the information transfer that occurs from one clinical setting to another (e.g., from hospital to nursing home), not just from one professional to another.
See Primer. Broadly, harm refers to the impairment of the anatomy or physiology of the body and physical, social, or psychological issues arising from the impairment such as disease, disability, or death. In the context of patient safety, the term “adverse event” is used to describe harm to patients that is caused by medical care, as opposed to harm caused by underlying disease or disability. Adverse events can be preventable, ameliorable, or the result of negligence.
See Primer. Although long accepted by clinicians as an inevitable hazard of hospitalization, recent efforts demonstrate that relatively simple measures can prevent the majority of health care associated infections. As a result, hospitals are under intense pressure to reduce the burden of these infections.
See Primer. Individuals' ability to find, process, and comprehend the basic health information necessary to act on medical instructions and make decisions about their health. Numerous studies have documented the degree to which patients do not understand basic information or instructions related to general aspects of their medical care, their medications, and procedures they will undergo. The limited ability to comprehend medical instructions or information in some cases reflects obvious language barriers (e.g., reviewing medication instructions in English with a patient who speaks very little English), but the scope of the problem reflects broader issues related to levels of education, cross-cultural issues, and overuse of technical terminology by clinicians.
Loosely defined or informal rules, often arrived at through experience or trial and error, used to make assessments and decisions (e.g., gastrointestinal complaints that wake patients up at night are unlikely to be benign in nature). Heuristics provide cognitive shortcuts in the face of complex situations and thus serve an important purpose. Unfortunately, they can also turn out to be wrong, and frequently used heuristics often form the basis for the many cognitive biases, such as anchoring bias, availability bias, and confirmation bias, that have received attention in the literature on diagnostic errors and medical decision making.
See Primer. High reliability organizations refer to organizations or systems that operate in hazardous conditions but have fewer than their fair share of adverse events. Commonly discussed examples include air traffic control systems, nuclear power plants, and naval aircraft carriers. It is worth noting that, in the patient safety literature, HROs are considered to operate with nearly failure-free performance records, not simply better than average ones. This shift in meaning is somewhat understandable given that the failure rates in these other industries are so much lower than rates of errors and adverse events in health care. This comparison glosses over the difference in significance of a "failure" in the nuclear power industry compared with one in health care. The point remains, however, that some organizations achieve consistently safe and effective performance records despite unpredictable operating environments or intrinsically hazardous endeavors. Detailed case studies of specific HROs have identified some common features, which have been offered as models for other organizations to achieve substantial improvements in their safety records. These features include:
- Preoccupation with failure: the acknowledgment of the high-risk, error-prone nature of an organization's activities and the determination to achieve consistently safe operations.
- Commitment to resilience: the development of capacities to detect unexpected threats and contain them before they cause harm, or bounce back when they do.
- Sensitivity to operations: an attentiveness to the issues facing workers at the frontline. This feature comes into play when conducting analyses of specific events (e.g., frontline workers play a crucial role in root cause analyses by bringing up unrecognized latent threats in current operating procedures), but also in connection with organizational decision-making, which is somewhat decentralized. Management units at the frontline are given some autonomy in identifying and responding to threats, rather than adopting a rigid top-down approach.
- A culture of safety, in which individuals feel comfortable drawing attention to potential hazards or actual failures without fear of censure from management.
In the context of safety analysis, hindsight bias refers to the tendency to judge the events leading up to an accident as errors because the bad outcome is known. The more severe the outcome, the more likely that decisions leading up to this outcome will be judged as errors. Judging the antecedent decisions as errors implies that the outcome was preventable. In legal circles, one might use the phrase "but for," as in "but for these errors in judgment, this terrible outcome would not have occurred." Such judgments return us to the concept of "hindsight is 20/20." Those reviewing events after the fact see the outcome as more foreseeable and therefore more preventable than they would have appreciated in real time.
Human factors are the strengths and constraints in the design of interactive systems and actions involving people, tools and technology, and work environments to ensure their safety, reliability, and effectiveness. Ergonomics is a related term, which is the study of the interplay between human factors, technologies, and work environments.
Related term: human factors engineering
See Primer. Human factors engineering is the discipline that attempts to identify and address safety problems that arise due to the interaction between people, technology, and work environments.
Human-centered design is a problem-solving approach that focuses on developing and optimizing the efficiency, effectiveness, and usability of products and interactive systems, thereby increasing their safety. This approach prevents patient safety incidents by considering human capabilities, skills, limitations, and needs. Solutions are developed by involving end-user perspectives throughout the process.
An adverse effect of medical care, rather than of the underlying disease (literally "brought forth by healer," from the Greek iatros, healer, and gennan, to bring forth); equivalent to adverse event.
Inattentional blindness is a cognitive concept that explains why individuals in an intense or complex situation can miss an important event or data point because competing attentional tasks divide their focus. Individuals experiencing inattentional blindness unknowingly orient themselves toward, and process information from, only one part of their environment while excluding others, which can contribute to task omissions and missed signals, such as incorrect medication administration.
See Primer. Patient safety event reporting systems are ubiquitous in hospitals and are a mainstay of efforts to detect safety and quality problems. However, while event reports may highlight specific safety concerns, they do not provide insights into the epidemiology of safety problems.
Legislation governing the requirements of, and conditions under which, consent must be obtained varies by jurisdiction. Most general guidelines require patients to be informed of the nature of their condition, the proposed procedure, the purpose of the procedure, the risks and benefits of the proposed treatments, the probability of the anticipated risks and benefits, alternatives to the treatment and their associated risks and benefits, and the risks and benefits of not receiving the treatment or procedure.
Although the goals of informed consent are irrefutable, consent is often obtained in a haphazard, pro forma fashion, with patients having little true understanding of procedures to which they have consented. Evidence suggests that asking patients to restate the essence of the informed consent improves the quality of these discussions and makes it more likely that the consent is truly informed.
Patient safety innovations are defined as “implementation of new or altered products, tools, services, processes, systems, policies, organizational structures, or business models implemented to improve or enhance quality of care and reduce harm.” Patient safety innovations may be local, regional, national, or international in scope; those included on the AHRQ PSNet Innovation Exchange have implementation data available demonstrating impact.
The phrase "just culture" was popularized in the patient safety lexicon by a report that outlined principles for achieving a culture in which frontline personnel feel comfortable disclosing errors including their own while maintaining professional accountability. The examples in the report relate to transfusion safety, but the principles clearly generalize across domains within health care organizations.
Traditionally, health care's culture has held individuals accountable for all errors or mishaps that befall patients under their care. By contrast, a just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control. A just culture also recognizes that many individual or "active" errors represent predictable interactions between human operators and the systems in which they work. However, in contrast to a culture that touts "no blame" as its governing principle, a just culture does not tolerate conscious disregard of clear risks to patients or gross misconduct (e.g., falsifying a record, performing professional duties while intoxicated).
In summary, a just culture recognizes that competent professionals make mistakes and acknowledges that even competent professionals will develop unhealthy norms (shortcuts, "routine rule violations"), but has zero tolerance for reckless behavior.
The terms active and latent as applied to errors were coined by Reason. Latent errors (or latent conditions) refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. For instance, whereas the active failure in a particular adverse event may have been a mistake in programming an intravenous pump, a latent error might be that the institution uses multiple different types of infusion pumps, making programming errors more likely. Thus, latent errors are quite literally "accidents waiting to happen." Latent errors are sometimes referred to as errors at the blunt end, referring to the many layers of the health care system that affect the person "holding" the scalpel. Active failures, in contrast, are sometimes referred to as errors at the sharp end, or the personnel and parts of the health care system in direct contact with patients.
Lean principles include standardized work, value stream, workflow, reducing waste, and efficiency with a focus on the customer experience. Application of Lean principles to healthcare settings increases patient safety and ensures that the patient’s healthcare experience is effective and of high quality. Researchers have used Lean methodology to improve processes related to chemotherapy preparation, surgical instrument sterilization, and medication administration.
Learning systems build functions, networks, and processes to use data, information, evidence, and knowledge to implement change and, ultimately, to sustain improvements. Learning systems focus on internal improvement and information sharing as well as external distribution of data and knowledge, using technology to generate improvement in the larger environment in which the organization functions. Learning systems nurture a culture that enables information sharing and improved collective awareness across the spectrum of the healthcare system.
Without taking anything away from the particular hospitals that have achieved Magnet status, the program as a whole has its critics. In fact, at least one state nurses' association (Massachusetts) has taken an official position critiquing the program, charging that its perpetuation reflects the financial interests of its sponsoring organization and the participating hospitals more than the goals of improving health care quality or improving working conditions for nurses. Regardless of the particulars of the Magnet Recognition Program and the lack of persuasive evidence linking magnet status to quality, to many the term magnet hospital connotes a hospital that delivers superior patient care and, partly on this basis, attracts and retains high-quality nurses.
See Primer. The concept of medical emergency teams (also known as rapid response teams) is that of a cardiac arrest team with more liberal calling criteria. Instead of just frank respiratory or cardiac arrest, medical emergency teams respond to a wide range of worrisome, acute changes in patients' clinical status, such as low blood pressure, difficulty breathing, or altered mental status. In addition to less stringent calling criteria, the concept of medical emergency teams de-emphasizes the traditional hierarchy in patient care in that anyone can initiate the call. Nurses, junior medical staff, or others involved in the care of patients can call for the assistance of the medical emergency team whenever they are worried about a patient's condition, without having to wait for more senior personnel to assess the patient and approve the decision to call for help.
The Medication Administration Record (MAR) is a legal and permanent documentation of the medications administered to a patient, typically by a nurse in an acute or sub-acute setting. Use of technology (such as bar-coded medication administration) and standardized procedures (such as two-person verification or application of the “rights” of medication administration) is included in the medication administration process to improve patient safety.
See Primer. Unintended inconsistencies in medication regimens occur with any transition in care. Medication reconciliation refers to the process of avoiding such inadvertent inconsistencies by reviewing the patient's current medication regimen and comparing it with the regimen being considered for the new setting of care.
A medication safety officer is a clinical practitioner in a leadership role who has expertise in safe medication management practices across all stages of medication delivery. His or her leadership and expertise optimize best practices and address medication adverse events in a systems-based approach.
Mindfulness reflects an organizational and/or team capacity to foster awareness of the myriad facets affecting detection of potential or emergent situations, so that problems can be recognized before they escalate into failure and a coordinated response can be mounted during an incident. This can be accomplished through initiatives that involve multidisciplinary work and develop teams and relationships. The concept aligns with the core components of high reliability as defined by Weick and Sutcliffe.
Related terms: high reliability organizations; situational awareness
See Primer. Misdiagnosis in the context of patient safety is an erroneous or delayed diagnosis and has the potential to cause patient harm. The term is frequently used interchangeably with "diagnostic error". Misdiagnoses can potentially prevent or delay appropriate treatment or result in unnecessary or harmful treatment, which can lead to physical, psychological, or financial harm to patients. Misdiagnosis can be caused by cognitive biases in clinicians or underlying systems-level issues in health care.
See Primer. Missed care is a subset of the category known as “error of omission.” It refers to care that is delayed, partially completed, or not completed at all. Missed care can result in lower safety culture ratings, increases in adverse events such as pressure injuries, and higher rates of postoperative mortality.
In some contexts, errors are dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Mistakes reflect failures during attentional behaviors: behavior that requires conscious thought, analysis, and planning, as in active problem solving. Rather than lapses in concentration (as with slips), mistakes typically involve insufficient knowledge, failure to correctly interpret available information, or application of the wrong cognitive heuristic or rule. Thus, choosing the wrong diagnostic test or ordering a suboptimal medication for a given condition represents a mistake. Mistakes often reflect lack of experience or insufficient training. Reducing the likelihood of mistakes typically requires more training, supervision, or occasionally disciplinary action (in the case of negligence).
Unfortunately, health care has typically responded to all errors as if they were mistakes, with remedial education and/or added layers of supervision. In point of fact, most errors are actually slips, which are failures of schematic behavior that occur due to fatigue, stress, or emotional distractions, and are prevented through sharply different mechanisms.
In healthcare, moral distress or moral injury occurs when a person knows the ethically appropriate action to take but is constrained from taking that action. The constraints can come from multiple external factors, including institutional or organizational regulations that do not align with the person’s moral principles; moral distress can also arise when the person feels powerless to act on their moral beliefs.
See Primer. An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). This definition is identical to that for close call.
See Primer. The list of never events has expanded over time to include adverse events that are unambiguous, serious, and usually preventable. While most are rare, when never events occur, they are devastating to patients and indicate serious underlying organizational safety problems.
Though less often cited than high reliability theory in the health care literature, normal accident theory has played a prominent role in the study of complex organizations. In contrast to the optimism of high reliability theory, normal accident theory suggests that, at least in some settings, major accidents become inevitable and, thus, in a sense, "normal."
Perrow proposed two factors that create an environment in which a major accident becomes increasingly likely over time: complexity and tight coupling. The degree of complexity envisioned by Perrow occurs when no single operator can immediately foresee the consequences of a given action in the system. Tight coupling occurs when processes are intrinsically time-dependent: once a process has been set in motion, it must be completed within a certain period of time. Importantly, normal accident theory contends that accidents become inevitable in complex, tightly coupled systems regardless of steps taken to increase safety. In fact, these steps sometimes increase the risk for future accidents through unintended collateral effects and general increases in system complexity.
Even if one does not believe the central contention of normal accident theory (that the potential for catastrophe emerges as an intrinsic property of certain complex systems), analyses informed by this theory's perspective have offered some fascinating insights into possible failure modes for high-risk organizations, including hospitals.
Normalization of deviance was coined by Diane Vaughan in her book The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, in which she analyzes the interactions between various cultural forces within NASA that contributed to the Challenger disaster. Vaughan used this expression to describe the gradual shift in what is regarded as normal after repeated exposures to "deviant behavior" (behavior straying from correct [or safe] operating procedure). Corners get cut, safety checks are bypassed, and alarms are ignored or turned off, and these behaviors become normal: not just common, but stripped of their significance as warnings of impending danger. In their discussion of a catastrophic error in health care, Chassin and Becher used the phrase "a culture of low expectations." When a system routinely produces errors (paperwork in the wrong chart, major miscommunications between different members of a given health care team, patients in the dark about important aspects of their care), providers in the system become inured to malfunction. In such a system, what should be regarded as a major warning of impending danger is instead ignored as normal operating procedure.
The onion model illustrates the multiple levels or layers of protection (as in the layers of an onion) in a complex, high-risk system such as any health care setting. These layers include external regulations (e.g., related to staffing levels or required organizational practices, such as medication reconciliation), organizational features such as a just culture, equipment and technology (e.g., computerized order entry), and education and training of personnel.
Organizational learning is an organizational state that ensures lessons from lived experience within a work environment are fed into, and embedded within, the organization's policies and culture to drive continual improvement. Activities that support organizational learning include detection, reporting, and discussion of safety issues by frontline staff, and promotion of experimentation and creative problem-solving to minimize the stigma of failures.
See Primer. Overdiagnosis involves identifying medical issues in people that were not going to be medically significant or cause harm. It may occur due to unnecessary screening of asymptomatic people, unneeded investigations in individuals with symptoms, or inappropriate reliance on laboratory or radiographic studies. Overdiagnosis can cause more harm than benefit. It can lead to unnecessary testing and treatment that ultimately adversely affects patient safety and well-being.
See Primer. The vast majority of health care takes place in the outpatient, or ambulatory, setting, and a growing body of research has identified and characterized factors that influence safety in office practice, the types of errors commonly encountered in ambulatory care, and potential strategies for improving ambulatory safety.
Originally created by the Agency for Healthcare Research and Quality (AHRQ), the Patient Safety Indicators (PSIs) reflect the quality of inpatient care as well as the rate of preventable complications and iatrogenic events.
Patient Safety Officers are individuals assigned to lead patient safety efforts in health care organizations and who are responsible for managing the patient safety program. They are accountable for assessing the organization's patient safety measures, ensuring staff are trained, promoting actions to identify and respond to patient safety events, and ensuring that senior leadership is kept informed about patient safety events and the overall status of the program.
Patient Safety Organizations (PSOs) were established through the Patient Safety and Quality Improvement Act that authorized the Department of Health and Human Services (HHS) to establish a voluntary system of reporting and analyzing data to evaluate and improve patient safety. PSOs work with healthcare providers (e.g., hospitals, nursing homes, dialysis centers) to assist them with their patient safety programs by analyzing the data submitted and providing feedback on ways to improve patient safety. AHRQ is the agency responsible for the oversight of the PSO program.
Performance can be defined in terms of patient outcomes but is more commonly defined in terms of processes of care (e.g., the percentage of eligible diabetics who have been referred for annual retinal examinations, the percentage of children who have received immunizations appropriate for their age, or the percentage of patients admitted to the hospital with pneumonia who receive antibiotics within 6 hours). Pay-for-performance initiatives reflect the efforts of purchasers of health care—from the federal government to private insurers—to use their purchasing power to encourage providers to develop whatever specific quality improvement initiatives are required to achieve the specified targets. Thus, rather than committing to a specific quality improvement strategy, such as a new information system or a disease management program, which may have variable success in different institutions, pay for performance creates a climate in which provider groups are strongly incentivized to find whatever solutions will work for them.
See Primer. Long and unpredictable work hours have been a staple of medical training for centuries. However, little attention was paid to the patient safety effects of fatigue among residents until March 1984, when Libby Zion died due to a medication-prescribing error while under the care of residents in the midst of a 36-hour shift. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting work hours for all residents, with the key components being that residents should work no more than 80 hours per week or 24 consecutive hours on duty, should not be "on-call" more than every third night, and should have 1 day off per week.
The Plan-Do-Study-Act cycle, commonly referred to as PDSA, refers to the cycle of activities advocated for achieving process or system improvement. The cycle was first proposed by Walter Shewhart, one of the pioneers of statistical process control (see run charts), and popularized by his student, quality expert W. Edwards Deming. The PDSA cycle represents one of the cornerstones of continuous quality improvement (CQI). The components of the cycle are briefly described below:
- Plan: Analyze the problem you intend to improve and devise a plan to correct the problem.
- Do: Carry out the plan (preferably as a pilot project to avoid major investments of time or money in unsuccessful efforts).
- Study: Did the planned action succeed in solving the problem? If not, what went wrong? If partial success was achieved, how could the plan be refined?
- Act: Adopt the change piloted above as is, abandon it as a complete failure, or modify it and run through the cycle again. Regardless of which action is taken, the PDSA cycle continues, either with the same problem or a new one.
PDSA can seem like a simple way to tackle quality problems. In practice, though, many omit key steps or do not perform sufficient cycles. PDSA aims to foster rapid change, with frequent tests of improvement, so relying on, for example, quarterly data to assess the effects of the efforts to date is usually not adequate. Another way in which practice deviates from theory for PDSA is the way in which the cycles play out. PDSA cycles are typically depicted as a smooth progression, with each cycle seamlessly and iteratively building on the previous one; as the number of cycles increases, their effectiveness and overall cumulative effect strengthen. In practice, this type of work involves frequent false starts, backtracking, regrouping, backsliding, and overlapping scenarios within the process. Well-executed PDSA cycles in practice involve a more complex tangle of related improvement efforts tackling different aspects of the target problem.
Preventability in the context of patient safety is the extent to which an adverse event or harm could have been avoided. Preventable adverse events occur because of an error or a failure to apply strategies for error prevention. One in 10 patients is harmed while receiving inpatient care in hospitals, and four in 10 patients are harmed in primary and outpatient care. This harm is caused by a range of adverse events, and 50%-80% of these events are preventable. In terms of prevalence, preventable patient safety events are most frequently related to diagnosis, prescription, or medication delivery processes.
In health care, production pressure refers to the pressure to put the delivery of services ahead of safety: the pressure to run hospitals at 100% capacity, with each bed filled with the sickest possible patients who are discharged at the first sign that they are stable, or the pressure to leave no operating room unused and to keep moving through each room's schedule as fast as possible. In a survey of anesthesiologists, half of respondents stated that they had witnessed at least one case in which production pressure resulted in what they regarded as unsafe care. Examples included elective surgery in patients without adequate preoperative evaluation and proceeding with surgery despite significant contraindications.
Production pressure produces an organizational culture in which frontline personnel (and often managers) are reluctant to suggest any course of action that compromises productivity, even temporarily. For instance, in the survey of anesthesiologists, respondents reported pressure by surgeons to avoid delaying cases through additional patient evaluation or canceling cases, even when patients had clear contraindications to surgery.
Psychological safety is the belief that speaking up will not result in negative consequences for oneself, such as punishment or humiliation. Psychological safety within health care teams fosters patient safety by allowing team members to feel accepted, respected, and able to share their ideas, questions, concerns and mistakes.
See Primer. Rapid response teams represent an intuitively simple concept: when a patient demonstrates signs of imminent clinical deterioration, a team of providers is summoned to the bedside to immediately assess and treat the patient with the goal of preventing adverse clinical outcomes.
Because mistaken substitution or reversal of alphanumeric information is such a potential hazard, read-back protocols typically include the use of phonetic alphabets, such as the NATO system ("Alpha-Bravo-Charlie-Delta-Echo...X-ray-Yankee-Zulu") now familiar to many. In health care, traditionally, read-back has been mandatory only in the context of checking to ensure accurate identification of recipients of blood transfusions. However, there are many other circumstances in which health care teams could benefit from following such protocols, for example, when communicating key lab results or patient orders over the phone, and even when exchanging information in person (e.g., handoffs).
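As a concrete illustration, the sketch below (in Python) expands an alphanumeric identifier into phonetic words so it can be read back unambiguously. The helper function and the sample identifier are hypothetical, not part of any standard read-back protocol.

```python
# Illustrative read-back aid: spell an alphanumeric identifier using the
# NATO phonetic alphabet so each character can be confirmed unambiguously.
NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}
DIGITS = {"0": "Zero", "1": "One", "2": "Two", "3": "Three", "4": "Four",
          "5": "Five", "6": "Six", "7": "Seven", "8": "Eight", "9": "Niner"}

def spell_out(identifier: str) -> str:
    """Expand each character of an identifier for unambiguous read-back."""
    words = []
    for ch in identifier.upper():
        words.append(NATO.get(ch) or DIGITS.get(ch) or ch)
    return "-".join(words)

print(spell_out("4B12"))  # Four-Bravo-One-Two
```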
An example of a red rule in health care might be the following: "No hospitalized patient can undergo a test of any kind, receive a medication or blood product, or undergo a procedure if they are not wearing an identification bracelet." The implication of designating this a red rule is that the moment a patient is identified as not meeting this condition, all activity must cease in order to verify the patient's identity and supply an identification band.
Health care organizations already have numerous rules and policies that call for strict adherence. The reason that some organizations are using red rules is that, unlike many standard rules, red rules will always be supported by the entire organization. In other words, when someone at the frontline calls for work to cease on the basis of a red rule, top management must always support this decision. Thus, when properly implemented, red rules should foster a culture of safety, as frontline workers will know that they can stop the line when they notice potential hazards, even when doing so may result in considerable inconvenience, time, or cost for their immediate supervisors or the organization as a whole.
Resilience is a characteristic that enables organizations to adapt to uncertain conditions in their work environment. Resilient organizations are able to anticipate risk and continuously adapt to the complexity of their work environments to prevent failure. Personal resilience, while important, is not the focus of this definition; resilience as an organizational trait reduces overreliance on individual resilience by strengthening the organization's capacity to minimize disruption.
Related term: Resilience Engineering
Resilience engineering is the organizational capability to design processes and actions that systematically track data, information, evidence, and knowledge in order to anticipate and respond to challenges, and to restore disrupted processes to standardized, improved states by applying lessons learned during the disruption. Processes are then hardwired to incorporate those changes and support continuous adjustment to sustain the improvements (in essence, to learn from disruptions), preventing future problems and failure and making the organization resilient.
Related term: Resilience
Risk management in healthcare is a complex set of clinical and administrative systems, processes, procedures, and reporting structures designed to detect, monitor, assess, mitigate, and prevent risks to patients.
See Primer. Efforts to engage patients in safety efforts have focused on three areas: enlisting patients in detecting adverse events, empowering patients to ensure safe care, and emphasizing patient involvement as a means of improving the culture of safety.
See Primer. Initially developed to analyze industrial accidents, root cause analysis is now widely deployed as an error analysis tool in health care. A central tenet of RCA is to identify underlying problems that increase the likelihood of errors while avoiding the trap of focusing on mistakes by individuals.
The phrase "rule of thumb" probably has it origin with trades such as carpentry in which skilled workers could use the length of their thumb (roughly one inch from knuckle to tip) rather than more precise measuring instruments and still produce excellent results. In other words, they measured not using a "rule of wood" (old-fashioned way of saying ruler), but by a "rule of thumb."
See Primer. High-reliability organizations consistently minimize adverse events despite carrying out intrinsically hazardous work. Such organizations establish a culture of safety by maintaining a commitment to safety at all levels, from frontline providers to managers and executives.
Safety-I and Safety-II reflect two perspectives on understanding safety improvements. The Safety-I approach focuses on identifying causes of and contributing factors to adverse events, without considering variations in human performance. The Safety-II approach considers variations in everyday performance to understand how things usually go right. Under the Safety-I framework, procedural violations in the health care setting might be viewed unfavorably; in the Safety-II framework, procedural violations may be seen as necessary modifications within a complex work environment. Applying both frameworks provides a deeper understanding of procedural violations and facilitates the development of targeted interventions for improving safety.
SBAR (Situation, Background, Assessment, Recommendation) is a concise, standardized process to clearly communicate information between individuals or groups. The Situation names the safety issue, Background provides known evidence and context, Assessment states the impression for next steps, and the Recommendation includes the plan to improve or remedy the patient safety issue. SBARs have commonly been used to support situational awareness and improve handoff communications. SBARs are also used to analyze patient safety events and develop potential solutions to communicate with other stakeholders, such as hospital leadership.
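To make the four components concrete, here is a minimal sketch (in Python) of an SBAR handoff represented as a structured record. The class, method, and example wording are illustrative, not a standard implementation.

```python
# A minimal structured SBAR record; fields mirror the four components above.
from dataclasses import dataclass

@dataclass
class SBAR:
    situation: str       # names the safety issue
    background: str      # known evidence and context
    assessment: str      # impression of what is going on
    recommendation: str  # proposed plan or request

    def render(self) -> str:
        """Format the record as a concise, standardized handoff message."""
        return (f"S: {self.situation}\n"
                f"B: {self.background}\n"
                f"A: {self.assessment}\n"
                f"R: {self.recommendation}")

note = SBAR(
    situation="Patient in 402 is increasingly short of breath.",
    background="Admitted for pneumonia; oxygen requirement rising since morning.",
    assessment="Likely clinical deterioration.",
    recommendation="Please evaluate now; consider a higher level of care.",
)
print(note.render())
```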
See Primer. The term “second victim” refers to health care workers who are involved in medical errors and adverse events and experience emotional distress. Some patient safety researchers and advocates have raised concerns regarding the use of the term, and others suggest that its appropriateness depends on hospital culture and context.
See Primer. An adverse event in which death or serious harm to a patient has occurred; usually used to refer to events that are not at all expected or acceptable (e.g., an operation on the wrong patient or body part). The choice of the word sentinel reflects the egregiousness of the injury (e.g., amputation of the wrong leg) and the likelihood that investigation of such events will reveal serious problems in current policies or procedures.
The sharp end refers to the personnel or parts of the health care system in direct contact with patients. Personnel operating at the sharp end may literally be holding a scalpel (e.g., an orthopedist who operates on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. To complete the metaphor, the blunt end refers to the many layers of the health care system that affect the scalpels, pills, and medical devices, or the personnel wielding, administering, and operating them. Thus, an error in programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple types of infusion pumps (making programming errors more likely) would represent a problem at the blunt end. The terminology of "sharp" and "blunt" ends corresponds roughly to active failures and latent conditions.
See Primer. The term "signout" is used to refer to the act of transmitting information about the patient. Handoffs and signouts have been linked to adverse clinical events in settings ranging from the emergency department to the intensive care unit.
Six sigma refers loosely to striving for near perfection in the performance of a process or production of a product. The name derives from the Greek letter sigma, often used to refer to the standard deviation of a normal distribution. Approximately 95% of a normally distributed population falls within 2 standard deviations of the average (or "2 sigma"), leaving about 5% of observations as "abnormal" or "unacceptable." Six Sigma, by contrast, targets a defect rate of just 3.4 per million opportunities, corresponding to performance 6 standard deviations from the population average.
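The arithmetic behind these figures can be checked directly. The sketch below computes defects per million opportunities from a sigma level, assuming the conventional 1.5-sigma long-term drift that industry uses to reconcile "six sigma" with the 3.4-per-million figure; the function name is our own.

```python
from math import erfc, sqrt

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a one-sided specification
    limit, applying the conventional 1.5-sigma long-term drift."""
    z = sigma_level - shift
    tail = 0.5 * erfc(z / sqrt(2))  # upper-tail probability of a standard normal
    return tail * 1_000_000

print(round(dpmo(6), 1))        # 3.4 defects per million at "six sigma"
print(round(dpmo(2, shift=0)))  # ~22750, i.e., ~2.3% beyond +2 sigma (one side)
```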
When it comes to industrial performance, having 5% of a product fall outside the desired specifications would represent an unacceptably high defect rate. What company could stay in business if 5% of its product did not perform well? For example, would we tolerate a pharmaceutical company that produced pills containing incorrect dosages 5% of the time? Certainly not. But when it comes to clinical performance (the number of patients who receive a proven medication, the number of patients who develop complications from a procedure), we routinely accept failure or defect rates in the 2% to 5% range, orders of magnitude short of Six Sigma performance.
Not every process in health care requires such near-perfect performance. In fact, one of the lessons of Reason's Swiss cheese model is the extent to which low overall error rates are possible even when individual components have many "holes." However, many high-stakes processes are far less forgiving, since a single "defect" can lead to catastrophe (e.g., wrong-site surgery, accidental administration of concentrated potassium).
Errors can be dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Slips refer to failures of schematic behaviors, or lapses in concentration (e.g., overlooking a step in a routine task due to a lapse in memory, an experienced surgeon nicking an adjacent organ during an operation due to a momentary lapse in concentration). Slips occur in the face of competing sensory or emotional distractions, fatigue, and stress. Reducing the risk of slips requires attention to the design of protocols, devices, and work environments: using checklists so that key steps will not be omitted, reducing fatigue among personnel (or shifting high-risk work away from personnel who have been working extended hours), removing unnecessary variation in the design of key devices, eliminating distractions (e.g., phones) from areas where work requires intense concentration, and other redesign strategies. Slips can be contrasted with mistakes, which are failures that occur during attentional behavior such as active problem solving.
Stewardship refers to efforts by healthcare providers (e.g., clinicians, hospitals, doctor’s offices, pharmacies, etc.) to promote the safe and appropriate use of healthcare resources. Recent stewardship priorities have focused on appropriate use of opioids and antimicrobials. The concept of “stewardship” was first introduced by the World Health Organization (WHO) to clarify the practical components of governance in the health sector; their focus was on how governments take responsibility for the health system and the wellbeing of the population, fulfill health system functions, assure equity, and coordinate interaction with government and society.
Most definitions of quality emphasize favorable patient outcomes as the gold standard for assessing quality. In practice, however, one would like to detect quality problems without waiting for poor outcomes to develop in such sufficient numbers that deviations from expected rates of morbidity and mortality can be detected. Donabedian first proposed that quality could be measured using aspects of care with proven relationships to desirable patient outcomes. For instance, if proven diagnostic and therapeutic strategies are monitored, quality problems can be detected long before demonstrable poor outcomes occur.
Aspects of care with proven connections to patient outcomes fall into two general categories: process and structure. Processes encompass all that is done to patients in terms of diagnosis, treatment, monitoring, and counseling. Cardiovascular care provides classic examples of the use of process measures to assess quality. Given the known benefits of aspirin and beta-blockers for patients with myocardial infarction, the quality of care for patients with myocardial infarction can be measured in terms of the rates at which eligible patients receive these proven therapies. The percentage of eligible women who undergo mammography at appropriate intervals would provide a process-based measure for quality of preventive care for women.
Structure refers to the setting in which care occurs and the capacity of that setting to produce quality. Traditional examples of structural measures related to quality include credentials, patient volume, and academic affiliation. More recent structural measures include the adoption of organizational models for inpatient care (e.g., closed intensive care units and dedicated stroke units) and possibly the presence of sophisticated clinical information systems. Cardiovascular care provides another classic example of structural measures of quality. Numerous studies have shown that institutions that perform more cardiac surgeries and invasive cardiology procedures achieve better outcomes than institutions that see fewer patients. Given these data, patient volume represents a structural measure of quality of care for patients undergoing cardiac procedures.
In the model, each slice of cheese represents a safety barrier or precaution relevant to a particular hazard. For example, if the hazard were wrong-site surgery, slices of the cheese might include conventions for identifying sidedness on radiology tests, a protocol for signing the correct site when the surgeon and patient first meet, and a second protocol for reviewing the medical record and checking the previously marked site in the operating room. Many more layers exist. The point is that no single barrier is foolproof. They each have "holes"; hence, the Swiss cheese. For some serious events (e.g., operating on the wrong site or wrong person), even though the holes will align infrequently, even rare cases of harm (errors making it "through the cheese") will be unacceptable.
While the model may convey the impression that the slices of cheese and the locations of their respective holes are independent, this may not be the case. For instance, in an emergency situation, all three of the surgical identification safety checks mentioned above may fail or be bypassed. The surgeon may meet the patient for the first time in the operating room. A hurried x-ray technologist might mislabel a film (or simply hang it backwards and a hurried surgeon not notice). "Signing the site" may not take place at all (e.g., if the patient is unconscious) or, if it takes place, may be rushed and offer no real protection. In the technical parlance of accident analysis, the different barriers may have a common failure mode, in which several protections are lost at once (i.e., several layers of the cheese line up).
In health care, such failure modes, in which slices of the cheese line up more often than one would expect if the location of their holes were independent of each other (and certainly more often than wings fly off airplanes) occur distressingly commonly. In fact, many of the systems problems discussed by Reason and others—poorly designed work schedules, lack of teamwork, variations in the design of important equipment between and even within institutions—are sufficiently common that many of the slices of cheese already have their holes aligned. In such cases, one slice of cheese may be all that is left between the patient and significant hazard.
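A bit of illustrative arithmetic makes the point about non-independence concrete. Assuming, hypothetically, three barriers that each fail 5% of the time, the sketch below compares the chance of all holes aligning when failures are independent with the chance when a common failure mode occasionally bypasses every barrier at once; all numbers are made up for illustration.

```python
# Hypothetical Swiss cheese arithmetic: three barriers, each failing 5% of
# the time. If failures are independent, harm requires all holes to align.
p_fail = 0.05
independent = p_fail ** 3           # 0.000125, i.e., about 1 in 8,000

# With a common failure mode (say, an emergency that bypasses all checks
# in 1% of cases), the aligned-failure rate is dominated by that mode.
p_common = 0.01
correlated = p_common + (1 - p_common) * independent

print(independent)                  # 0.000125
print(round(correlated, 6))         # ~0.010124, roughly 80-fold higher
```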
See Primer. Medicine has traditionally treated quality problems and errors as failings on the part of individual providers, perhaps reflecting inadequate knowledge or skill levels. The systems approach, by contrast, takes the view that most errors reflect predictable human failings in the context of poorly designed systems (e.g., expected lapses in human vigilance in the face of long work hours or predictable mistakes on the part of relatively inexperienced personnel faced with cognitively complex situations). Rather than focusing corrective efforts on reprimanding individuals or pursuing remedial education, the systems approach seeks to identify situations or factors likely to give rise to human error and implement systems changes that will reduce their occurrence or minimize their impact on patients. This view holds that efforts to catch human errors before they occur or block them from causing harm will ultimately be more fruitful than ones that seek to somehow create flawless providers.
This systems focus includes paying attention to human factors engineering (or ergonomics), including the design of protocols, schedules, and other factors that are routinely addressed in other high-risk industries but have traditionally been ignored in medicine.
Teams are groups of individuals who work dynamically, interdependently, and collaboratively toward a common goal, while retaining specific individual roles or functions. Team members should (1) include anyone involved in the patient care process (including leaders), (2) have clearly defined roles and responsibilities, (3) be accountable to the team for their actions, and (4) stay continually informed for effective team functioning. It is important that teams are representative not only of different professions but also reflect the diversity of their members (sex, age, race, ethnicity, culture, etc.), so that the team is representative of the population it serves. AHRQ has developed an evidence-based program called TeamSTEPPS® that provides tools for healthcare teams in different types of organizations, particularly focusing on improved communication.
See Primer. Providing safe health care depends on highly trained individuals with disparate roles and responsibilities acting together in the best interests of the patient. The need for improved teamwork has led to the application of teamwork training principles, originally developed in aviation, to a variety of health care settings.
See Primer. The "Five Rights"—administering the Right Medication, in the Right Dose, at the Right Time, by the Right Route, to the Right Patient—are the cornerstone of traditional nursing teaching about safe medication practice.
While the Five Rights represent goals of safe medication administration, they contain no procedural detail, and thus may inadvertently perpetuate the traditional focus on individual performance rather than system improvement. Procedures for ensuring each of the Five Rights must take into account human factors and systems design issues (such as workload, ambient distractions, poor lighting, problems with wristbands, ineffective double-check protocols, etc.) that can threaten or undermine even the most conscientious efforts to comply with the Five Rights. In the end, the Five Rights remain an important goal for safe medication practice, but one that may give the illusion of safety if not supported by strong policies and procedures, a system organized around modern principles of patient safety, and a robust safety culture.
"Protected health information" (PHI) includes all medical records and other individually identifiable health information. "Individually identifiable information" includes data that explicitly linked to a patient as well as health information with data items with a reasonable potential for allowing individual identification.
HIPAA also requires providers to offer patients certain rights with respect to their information, including the right to access and copy their records and the right to request amendments to the information contained in their records.
Administrative protections specified by HIPAA to promote the above regulations and rights include requirements for a Privacy Officer and staff training regarding the protection of patients’ information.
Transitions of care refer to the periods when patients move from one health care unit or setting to another, in a different location or offering a different level of care.
Transparency in healthcare emphasizes providing information on healthcare quality, safety, and consumer experience with care in a reliable and understandable manner. Transparency is aimed at promoting patient safety by building trust between patients, providers, the organization, and society at large, with the goal of improved safety, informed communication, and increased knowledge. Transparency can occur at the individual level (i.e., disclosure of medical errors by clinicians to patients and families) as well as organizational levels (such as public reporting activities from CMS, AHRQ, and Leapfrog).
See Primer. Triggers are clues that can be used to identify an adverse event (AE) or error. A simple example is flagging the administration of naloxone: in an inpatient clinical setting, its use suggests an opioid overdose, which is an adverse event. Trigger tools are instruments designed to identify adverse events so that organizations can measure and track them. Trigger tools allow healthcare organizations to identify greater numbers of AEs than voluntary reporting does. IHI has a Global Trigger Tool that includes many different types of triggers for adverse events.
See Primer. Signals for detecting likely adverse events. Triggers alert providers involved in patient safety activities to probable adverse events so they can review the medical record to determine if an actual or potential adverse event has occurred. For instance, if a hospitalized patient received naloxone (a drug used to reverse the effects of narcotics), the patient probably received an excessive dose of morphine or some other opiate. In the emergency department, the use of naloxone would more likely represent treatment of a self-inflicted opiate overdose, so the trigger would have little value in that setting. But, among patients already admitted to the hospital, a pharmacy could use the administration of naloxone as a "trigger" to investigate possible adverse drug events.
In cases in which the trigger correctly identified an adverse event, causative factors can be identified and, over time, interventions developed to reduce the frequency of particularly common causes of adverse events. The traditional use of triggers has been to efficiently identify adverse events after the fact. However, using triggers in real time has tremendous potential as a patient safety tool. In a study of real-time triggers in a single community hospital, for example, more than 1000 triggers were generated in 6 months, and approximately 25% led to physician action and would not have been recognized without the trigger.
As with any alert or alarm system, the threshold for generating triggers has to balance true and false positives. The system will lose its value if too many triggers prove to be false alarms. This concern is less relevant when triggers are used as chart review tools. In such cases, the tolerance of false alarms depends only on the availability of sufficient resources for medical record review. Reviewing four false alarms for every true adverse event might be quite reasonable in the context of an institutional safety program, but frontline providers would balk at (and eventually ignore) a trigger system that generated four false alarms for every true one.
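A minimal sketch of how a pharmacy might screen administration records for triggers appears below (Python). The trigger list, record fields, and the inpatient-only rule, echoing the naloxone example above, are all illustrative assumptions, not an actual trigger tool specification.

```python
# Illustrative medication-trigger screen over simple administration records.
# Drug-to-reason mappings and field names are hypothetical.
TRIGGER_DRUGS = {
    "naloxone": "possible opioid overdose",
    "flumazenil": "possible benzodiazepine oversedation",
    "vitamin k": "possible warfarin over-anticoagulation",
}

def flag_triggers(administrations):
    """Return inpatient records whose drug matches a trigger, for chart review."""
    flags = []
    for rec in administrations:
        reason = TRIGGER_DRUGS.get(rec["drug"].lower())
        # Restrict to inpatients: in the ED, naloxone more likely treats
        # a self-inflicted overdose, so the trigger has little value there.
        if reason and rec["setting"] == "inpatient":
            flags.append({**rec, "review_reason": reason})
    return flags

records = [
    {"patient_id": "A12", "drug": "Naloxone", "setting": "inpatient"},
    {"patient_id": "B34", "drug": "naloxone", "setting": "emergency"},
]
print(flag_triggers(records))  # only the inpatient case is flagged
```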
See Primer. Underdiagnosis involves delayed or missed diagnosis of a medical condition. It may occur through either acts of omission (e.g., underuse of appropriate tests) or acts of commission (e.g., overuse of inappropriate tests). There may be healthcare disparities associated with underdiagnosis, such as missed diagnosis of acute myocardial infarction in women or missed diagnosis of depression in African American patients.
Underuse refers to the failure to provide a health care service when it would have produced a favorable outcome for a patient. Standard examples include failures to provide appropriate preventive services to eligible patients (e.g., Pap smears, flu shots for elderly patients, screening for hypertension) and proven medications for chronic illnesses (steroid inhalers for asthmatics; aspirin, beta-blockers, and lipid-lowering agents for patients who have suffered a recent myocardial infarction).
Overuse refers to providing a process of care in circumstances where the potential for harm exceeds the potential for benefit. Prescribing an antibiotic for a viral infection like a cold, for which antibiotics are ineffective, constitutes overuse. The potential for harm includes adverse reactions to the antibiotics and increases in antibiotic resistance among bacteria in the community. Overuse can also apply to diagnostic tests and surgical procedures.
Misuse occurs when an appropriate process of care has been selected but a preventable complication occurs and the patient does not receive the full potential benefit of the service. Avoidable complications of surgery or medication use are misuse problems. A patient who suffers a rash after receiving penicillin for strep throat, despite having a known allergy to that antibiotic, is an example of misuse. A patient who develops a pneumothorax after an inexperienced operator attempted to insert a subclavian line would represent another example of misuse.
See Primer. Patient safety event reporting systems are ubiquitous in hospitals and are a mainstay of efforts to detect safety and quality problems. However, while event reports may highlight specific safety concerns, they do not provide insights into the epidemiology of safety problems.
From a definitional point of view, it does not matter if frontline users are justified in working around a given policy or equipment design feature. What does matter is that the motivation for a workaround lies in getting work done, not laziness or whim. Thus, the appropriate response by managers to the existence of a workaround should not consist of reflexively reminding staff about the policy and restating the importance of following it. Rather, workarounds should trigger assessment of workflow and the various competing demands for the time of frontline personnel. In busy clinical areas where efficiency is paramount, managers can expect workarounds to arise whenever policies create added tasks for frontline personnel, especially when the extra work is out of proportion to the perceived importance of the safety goal.
See Primer. Few medical errors are as terrifying as those that involve patients who have undergone surgery on the wrong body part, undergone the incorrect procedure, or had a procedure intended for another patient. These "wrong-site, wrong-procedure, wrong-patient errors" (WSPEs) are rightly termed never events.
An ameliorable ADE is one in which the patient experienced harm from a medication that, while not completely preventable, could have been mitigated. For instance, a patient taking a cholesterol-lowering agent (statin) may develop muscle pains and eventually progress to a more serious condition called rhabdomyolysis. Failure to periodically check a blood test that assesses muscle damage or failure to recognize this possible diagnosis in a patient taking statins who subsequently develops rhabdomyolysis would make this event an ameliorable ADE: harm from medical care that could have been lessened with earlier, appropriate management. Again, the initial development of some problem was not preventable, but the eventual harm that occurred need not have been so severe, hence the term ameliorable ADE.
Adverse effect produced by the use of a medication in the recommended manner (i.e., a drug side effect). These effects range from nuisance effects (e.g., dry mouth with anticholinergic medications) to severe reactions, such as anaphylaxis to penicillin. Adverse drug reactions represent a subset of the broad category of adverse drug events; specifically, they are non-preventable ADEs.
See Primer. Any injury caused by medical care.
Examples:
- pneumothorax from central venous catheter placement
- anaphylaxis to penicillin
- postoperative wound infection
- hospital-acquired delirium (or "sundowning") in elderly patients
Identifying something as an adverse event does not imply "error," "negligence," or poor quality care. It simply indicates that an undesirable clinical outcome resulted from some aspect of diagnosis or therapy, not an underlying disease process. Thus, pneumothorax from central venous catheter placement counts as an adverse event regardless of insertion technique. Similarly, postoperative wound infections count as adverse events even if the operation proceeded with optimal adherence to sterile procedures, the patient received appropriate antibiotic prophylaxis in the perioperative setting, and so on. (See also iatrogenic).
See Primer. Being discharged from the hospital can be dangerous for patients. Nearly 20% of patients experience an adverse event in the first 3 weeks after discharge, including medication errors, health care associated infections, and procedural complications.
See Primer. Computerized warnings and alarms are used to improve safety by alerting clinicians of potentially unsafe situations. However, this proliferation of alerts may have negative implications for patient safety as well.
Beers criteria define medications that generally should be avoided in ambulatory elderly patients, doses or frequencies of administration that should not be exceeded, and medications that should be avoided in older persons known to have any of several common conditions. The criteria were originally developed using a formal consensus process for combining reviews of the evidence with expert input. The criteria for inappropriate use address commonly used categories of medications such as sedative-hypnotics, antidepressants, antipsychotics, antihypertensives, nonsteroidal anti-inflammatory agents, oral hypoglycemics, analgesics, dementia treatments, platelet inhibitors, histamine-2 blockers, antibiotics, decongestants, iron supplements, muscle relaxants, gastrointestinal antispasmodics, and antiemetics. The criteria were intended to guide clinical practice, but also to inform quality assurance review and health services research.
Most would agree that prescriptions for medications deemed inappropriate according to Beers criteria represent poor quality care. Unfortunately, harm does not only occur from receipt of these inappropriately prescribed medications. In one comprehensive national study of medication-related emergency department visits for elderly patients, most problems involved common and important medications not considered inappropriate according to the Beers criteria: principally, oral anticoagulants (e.g., warfarin), antidiabetic agents (e.g., insulin), and antiplatelet agents (aspirin and clopidogrel).
Best practices in health care are considered the "best way" to identify, collect, evaluate, and disseminate information; implement practices; and/or monitor the outcomes of health care interventions for patients or population groups with defined indications or conditions. The term "best practices" is somewhat controversial, as some "best practices" may not be supported by rigorous evidence. Therefore, there has been a transition to using "evidence-based practice" or the "best available evidence" to demonstrate that the practice is grounded in empirical research. Examples of evidence-based best practices include surgical pre-op checklists, sepsis bundles, and reducing the use of indwelling catheters.
The blunt end refers to the many layers of the health care system not in direct contact with patients, but which influence the personnel and equipment at the sharp end who do contact patients. The blunt end thus consists of those who set policy, manage health care institutions, and design medical devices, and other people and forces, which, though removed in time and space from direct patient care, nonetheless affect how care is delivered. Thus, an error programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple different types of infusion pumps, making programming errors more likely, would represent a problem at the blunt end. The terminology of "sharp" and "blunt" ends corresponds roughly to active failures and latent conditions.
A bundle is a set of evidence-based interventions that, when performed consistently and reliably, has been shown to improve outcomes and safety in health care. A bundle typically comprises a small number of clinical practices (usually 3-5), each supported by scientifically robust clinical evidence, that are all performed cohesively for maximal impact. Examples include bundles to improve maternal care and the timely identification and treatment of sepsis.
See Primer. Burnout is a syndrome of emotional exhaustion, depersonalization, and decreased sense of accomplishment at work that results in overwhelming symptoms of fatigue, exhaustion, cynical detachment, and feelings of ineffectiveness. Burnout among health care professionals is widely understood as an organizational problem in health care that needs to be addressed and has been associated with increased patient safety incidents, including medical errors, reduced patient satisfaction, and poorer safety and quality ratings.
See Primer. Though a seemingly simple intervention, checklists have played a leading role in the most significant successes of the patient safety movement, including the near-elimination of central line associated bloodstream infections in many intensive care units.
See Primer. Any system designed to improve clinical decision-making related to diagnostic or therapeutic processes of care. Typically a decision support system responds to "triggers" or "flags" (specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters) and provides information or recommendations directly relevant to a specific patient encounter.
CDSSs address activities ranging from the selection of drugs (e.g., the optimal antibiotic choice given specific microbiologic data) or diagnostic tests to detailed support for optimal drug dosing and support for resolving diagnostic dilemmas. Structured antibiotic order forms represent a common example of paper-based CDSSs. Although such systems are still commonly encountered, many people equate CDSSs with computerized systems in which software algorithms generate patient-specific recommendations by matching characteristics, such as age, renal function, or allergy history, with rules in a computerized knowledge base.
The distinction between decision support and simple reminders can be unclear, but usually reminder systems are included as decision support if they involve patient-specific information. For instance, a generic reminder (e.g., "Did you obtain an allergy history?") would not be considered decision support, but a warning (e.g., "This patient is allergic to codeine.") that appears at the time of entering an order for codeine would be. A recent systematic review estimated the pooled effects for simple computer reminders and more complex decision support provided at the point of care (i.e., as clinicians entered orders in computerized provider order entry systems or performed clinical documentation in electronic medical records).
An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). Such events have also been termed near miss incidents.
Closed loop communication consists of exchanging clear, concise information, and acknowledging receipt of the information to confirm its understanding. The communication is addressed to a specific person on the clinical team by name and the recipient repeats the message back to the sender. Such communication enhances patient safety by preventing confusion, ensuring that teams operate under a shared mental model, and that a specific person is responsible for completing the task.
Cognitive biases are ways in which a particular person understands events, facts, and other people based on their own set of beliefs and experiences, which may or may not be reasonable or accurate. People are often unaware of the influence of their cognitive biases. Examples of common cognitive biases include:
- Confirmation bias (e.g., neglecting evidence that goes against your belief);
- Anchoring bias (prioritizing information/data that supports one's initial impressions);
- Framing bias (the manner by which data are presented);
- Authority bias (when a higher authority provides information);
- Affect heuristic (when actions are swayed by emotion versus rational decisions).
Cognitive bias impacts patient safety in a variety of ways. For example, cognitive biases can lead to diagnostic errors because they disrupt physicians' and advanced practice providers' processes for gathering and interpreting evidence and taking appropriate action. Authority bias is common in healthcare; for example, nurses may tend to accept the opinions of physicians at face value.
Related terms: Confirmation bias, availability bias, rule of thumb
Communication (disclosure) and resolution programs (CRPs) emphasize early admission of adverse events and proactive approaches to resolving patient safety issues. CRPs offer patients empathetic treatment and care after adverse events, even when no harm occurs. These programs focus on transparency; recognizing accountability; acting in a fair, just manner; using and sustaining practices that enhance patient safety; and making disclosure communications truly transparent. The CANDOR toolkit, developed by AHRQ, provides organizations with the tools necessary to implement a CRP. Whereas the historical approach in response to unexpected harm often followed a "deny-and-defend" strategy (e.g., providing limited information to patients and families, avoiding admission of fault), the CANDOR toolkit uses a person-centered approach and promotes greater transparency and early sharing of errors with patients and families.
Related term: Transparency
Compassion fatigue refers to the physical and mental exhaustion and emotional withdrawal experienced by individuals who care for sick or traumatized people over an extended period. Compassion fatigue can decrease effective teamwork behaviors and increase secondary stress, burnout, depression, and anxiety, as well as escalate the use of negative coping behaviors, all of which may have a negative impact on patient safety, as these healthcare workers may commit more errors.
Related term: Burnout
Complexity theory differs importantly from systems thinking in its emphasis on the interaction between local systems and their environment (such as the larger system in which a given hospital or clinic operates). It is often tempting to ignore the larger environment as unchangeable and therefore outside the scope of quality improvement or patient safety activities. According to complexity theory, however, behavior within a hospital or clinic (e.g., non-compliance with a national practice guideline) can often be understood only by identifying interactions between local attributes and environmental factors.
See Primer. Computerized provider order entry systems ensure standardized, legible, and complete orders and, especially when paired with decision support systems, have the potential to sharply reduce medication prescribing errors.
The tendency to focus on evidence that supports a working hypothesis, such as a diagnosis in clinical medicine, rather than to look for evidence that refutes it or provides greater support to an alternative diagnosis. Suppose that a 65-year-old man with a past history of angina presents to the emergency department with acute onset of shortness of breath. The physician immediately considers the possibility of cardiac ischemia, so asks the patient if he has experienced any chest pain. The patient replies affirmatively. Because the physician perceives this answer as confirming his working diagnosis, he does not ask if the chest pain was pleuritic in nature, which would decrease the likelihood of an acute coronary syndrome and increase the likelihood of pulmonary embolism (a reasonable alternative diagnosis for acute shortness of breath accompanied by chest pain). The physician then orders an EKG and cardiac troponin. The EKG shows nonspecific ST changes and the troponin returns slightly elevated.
Of course, ordering an EKG and testing cardiac enzymes is appropriate in the work-up of acute shortness of breath, especially when it is accompanied by chest pain in a patient with known angina. The problem is that these tests may be misleading, since positive results are consistent not only with acute coronary syndrome but also with pulmonary embolism. To avoid confirmation bias in this case, the physician might have obtained an arterial blood gas or a D-dimer level. Abnormal results for either of these tests would be relatively unlikely to occur in a patient with an acute coronary syndrome (unless complicated by pulmonary edema), but likely to occur with pulmonary embolism. These results could be followed up by more direct testing for pulmonary embolism (e.g., with a helical CT scan of the chest), whereas normal results would allow the clinician to proceed with greater confidence down the road of investigating and managing cardiac ischemia.
This vignette was presented as if information were sought in sequence. In many cases, especially in acute care medicine, clinicians have the results of numerous tests in hand when they first meet a patient. The results of these tests often do not all suggest the same diagnosis. The appeal of accentuating confirmatory test results and ignoring nonconfirmatory ones is that it minimizes cognitive dissonance.
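One way to see why seeking potentially disconfirming evidence matters is a quick Bayes calculation. The sketch below uses made-up illustrative numbers (a 30% pretest probability of pulmonary embolism and a likelihood ratio of 0.1 for a normal D-dimer), not clinical values.

```python
# Illustrative Bayes update in odds form; all numbers are assumptions
# chosen for demonstration, not clinical guidance.
def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior_pe = 0.30          # assumed pretest probability of pulmonary embolism
lr_negative_dimer = 0.1  # assumed likelihood ratio of a normal D-dimer

# A normal result drops the probability from 30% to about 4%, letting the
# clinician pursue cardiac ischemia with greater confidence.
print(round(update(prior_pe, lr_negative_dimer), 3))  # ~0.041
```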
A related cognitive trap that may accompany confirmation bias and compound the possibility of error is "anchoring bias": the tendency to stick with one's first impressions, even in the face of significant disconfirming evidence.
Crisis management is the process by which a team or organization deals with a major event that threatens to harm the organization, its stakeholders, or the general public. Examples of events that may require crisis management include significant adverse events (death of a patient due to a medical error) or a significant environmental event such as a fire. The COVID-19 pandemic is also an example – a public health emergency requiring crisis management early in the event.
A term made famous by a classic human factors study by Cooper of "anesthetic mishaps," though the term had first been coined in the 1950s. Cooper and colleagues brought the technique of critical incident analysis to a wide audience in health care but followed the definition of the originator of the technique. They defined critical incidents as occurrences that are "significant or pivotal, in either a desirable or an undesirable way," though Cooper and colleagues (and most others since) chose to focus on incidents that had potentially undesirable consequences. This concept is best understood in the context of the type of investigation that follows, which is very much in the style of root cause analysis. Thus, significant or pivotal means that there was significant potential for harm (or actual harm), but also that the event has the potential to reveal important hazards in the organization. In many ways, it reflects the spirit of the expression in quality improvement circles, "every defect is a treasure." In other words, these incidents, whether near misses or disasters in which significant harm occurred, provide valuable opportunities to learn about individual and organizational factors that can be remedied to prevent similar incidents in the future.
Cultural competence includes individual attitudes and behaviors and refers to one’s capacity to appreciate, respect, and interact with members of a different social or cultural group. In healthcare, it includes the ability to provide culturally sensitive care to individuals. To provide person-centered, high quality, and safe care, health care professionals must be prepared to tailor care to prevent adverse events or harm to individual patients from different groups (e.g., race, ethnicity, gender, language, religion, social status). Research has shown that health literacy, English proficiency, lack of trust, and other cultural issues can lead to adverse events, particularly medication errors. Other terms that have been associated with cultural competence include cultural intelligence (knowledge about various cultures and their social context) and cultural humility, both of which assume an approach to care where the provider is sensitive to the cultural context of patients and avoids making assumptions about the patient’s beliefs and environment.
See Primer. Debriefing is a brief, planned, and non-threatening conversation that is conducted to review a procedure or event. The goal is to get individuals involved together right after the procedure or event to discuss what went well and to identify areas for improvement. A debrief can help obtain new information after patient safety events such as near misses, adverse events, or medical errors.
Typically a decision support system responds to "triggers" or "flags"—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter. For instance, ordering an aminoglycoside for a patient with creatinine above a certain value might trigger a message suggesting a dose adjustment based on the patient’s decreased renal function.
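As a sketch of how such a trigger rule might be encoded (the drug list, field names, and creatinine threshold below are hypothetical; real systems evaluate rules like this against structured order and laboratory data):

```python
# A minimal sketch of a decision support "trigger." All thresholds and
# names are hypothetical, for illustration only.

AMINOGLYCOSIDES = {"gentamicin", "tobramycin", "amikacin"}
CREATININE_THRESHOLD = 1.5  # hypothetical trigger value, in mg/dL

def check_order(drug, latest_creatinine_mg_dl):
    """Return an advisory message if the trigger fires, else None."""
    if drug.lower() in AMINOGLYCOSIDES and latest_creatinine_mg_dl > CREATININE_THRESHOLD:
        return (f"Creatinine {latest_creatinine_mg_dl} mg/dL exceeds "
                f"{CREATININE_THRESHOLD}: consider dose adjustment for "
                "decreased renal function.")
    return None  # no trigger fires; the order proceeds without interruption

print(check_order("gentamicin", 2.1))
```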
See Primer. Deprescribing is the process of supervised medication discontinuation or dose reduction to reduce potentially inappropriate medication (PIM) use. Deprescribing is one intervention that can be applied to reduce the risk for adverse drug events (ADEs) or medication errors associated with polypharmacy.
See Primer. Thousands of patients die every year due to diagnostic errors. While clinicians' cognitive biases play a role in many diagnostic errors, underlying health care system problems also contribute to missed and delayed diagnoses.
See Primer. Many victims of medical errors never learn of the mistake, because the error is simply not disclosed. Physicians have traditionally shied away from discussing errors with patients, due to fear of precipitating a malpractice lawsuit as well as embarrassment and discomfort with the disclosure process.
See Primer. Popular media often depicts physicians as brilliant, intimidating, and condescending in equal measures. This stereotype, though undoubtedly dramatic and even amusing, obscures the fact that disruptive and unprofessional behavior by clinicians poses a definite threat to patient safety.
See Primer. Long and unpredictable work hours have been a staple of medical training for centuries. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting duty hours for all residents to reduce fatigue. The implementation of resident duty-hour restrictions has been controversial, as evidence regarding its impact on patient safety has been mixed.
See Primer. Patient and caregiver engagement is centered on providers, patients, and caregivers working together to improve health. A patient’s greater engagement in healthcare contributes to improved health outcomes. Patients want to be engaged in their healthcare decision-making process, and those who are engaged as decision-makers in their own care tend to be healthier and experience better outcomes. Efforts to engage patients and caregivers in safety efforts have focused on three areas: enlisting patients and caregivers in detecting adverse events, empowering patients and caregivers to ensure safe care, and emphasizing patient and caregiver involvement as a means of improving the culture of safety.
An act of commission (doing something wrong) or omission (failing to do the right thing) that leads to an undesirable outcome or significant potential for such an outcome. For instance, ordering a medication for a patient with a documented allergy to that medication would be an act of commission. Failing to prescribe a proven medication with major benefits for an eligible patient (e.g., low-dose unfractionated heparin as venous thromboembolism prophylaxis for a patient after hip replacement surgery) would represent an error of omission.
Errors of omission are more difficult to recognize than errors of commission but likely represent a larger problem. In other words, there are likely many more instances in which the provision of additional diagnostic, therapeutic, or preventive modalities would have improved care than there are instances in which the care provided quite literally should not have been provided. In many ways, this point echoes the generally agreed-upon view in the health care quality literature that underuse far exceeds overuse, even though the latter historically received greater attention. (See definition for Underuse, Overuse, Misuse.) In addition to commission vs. omission, three other dichotomies commonly appear in the literature on errors: active failures vs. latent conditions, errors at the sharp end vs. errors at the blunt end, and slips vs. mistakes.
Error chain generally refers to the series of events that led to a disastrous outcome, typically uncovered by a root cause analysis. Sometimes the chain metaphor carries the added sense of inexorability, as many of the causes are tightly coupled, such that one problem begets the next. A more specific meaning of error chain, especially when used in the phrase "break the error chain," relates to the common themes or categories of causes that emerge from root cause analyses. These categories go by different names in different settings, but they generally include (1) failure to follow standard operating procedures, (2) poor leadership, (3) breakdowns in communication or teamwork, (4) overlooking or ignoring individual fallibility, and (5) losing track of objectives. Used in this way, "break the error chain" is shorthand for an approach in which team members continually address these links as a crisis or routine situation unfolds. The checklists that are included in teamwork training programs have categories corresponding to these common links in the error chain (e.g., establish a team leader, assign roles and responsibilities, and monitor your teammates).
The concept of evidence-based treatments has particular relevance to patient safety, because many recommended methods for measuring and improving safety problems have been drawn from other high-risk industries, without any studies to confirm that these strategies work well in health care (or, in many cases, that they work well in the original industry). The lack of evidence supporting widely recommended (sometimes even mandated) patient safety practices contrasts sharply with the rest of clinical medicine. While individual practitioners may employ diagnostic tests or administer treatments of unproven value, professional organizations typically do not endorse such aspects of care until well-designed studies demonstrate that these diagnostic or treatment strategies confer net benefit to patients (i.e., until they become evidence-based). Certainly, diagnostic and therapeutic processes do not become standard of care or in any way mandated until they have undergone rigorous evaluation in well-designed studies.
In patient safety, by contrast, patient safety goals established at state and national levels (sometimes even mandated by regulatory agencies or by law) often reflect ideas that have undergone little or no empiric evaluation. Just as in clinical medicine, promising safety strategies can turn out to confer no benefit or even to create new problems; hence the need for rigorous evaluation of candidate patient safety strategies. That said, just how high to set the bar for the evidence required to justify actively disseminating patient safety and quality improvement strategies is a subject that has received considerable attention in recent years. Some leading thinkers in patient safety argue that an evidence bar comparable to that used in more traditional clinical medicine would be too high, given the difficulty of studying complex social systems such as hospitals and clinics, and the high costs of studying interventions such as rapid response teams or computerized order entry.
Error analysis may involve retrospective investigations (as in Root Cause Analysis) or prospective attempts to predict "error modes." Different frameworks exist for predicting possible errors. One commonly used approach is failure mode and effect analysis (FMEA), in which the likelihood of a particular process failure is combined with an estimate of the relative impact of that error to produce a "criticality index." By combining the probability of failure with the consequences of failure, this index allows for the prioritization of specific processes as quality improvement targets. (See definition for Failure Mode and Effect Analysis, including a worked example.)
A common process used to prospectively identify error risk within a particular process. FMEA begins with a complete process mapping that identifies all the steps that must occur for a given process to occur (e.g., programming an infusion pump or preparing an intravenous medication in the pharmacy). With the process mapped out, the FMEA then continues by identifying the ways in which each step can go wrong (i.e., the failure modes for each step), the probability that each error will be detected (i.e., so that it can be corrected before causing harm), and the consequences or impact of the error not being detected. The estimates of the likelihood of a particular process failure, the chance of detecting such failure, and its impact are combined numerically to produce a criticality index.
This criticality index provides a rough quantitative estimate of the magnitude of hazard posed by each step in a high-risk process. Assigning a criticality index to each step allows prioritization of targets for improvement. For instance, an FMEA analysis of the medication-dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest criticality indices) would be prioritized for error proofing.
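To make the ranking arithmetic concrete, the following minimal sketch combines scores for probability of failure, chance of the failure going undetected, and impact, as described above. The step names and 1-10 scales are hypothetical, for illustration only:

```python
# A minimal sketch of ranking process steps by criticality index. Many FMEA
# variants multiply occurrence, detectability, and severity scores (1-10)
# into a "risk priority number"; all scores below are hypothetical.

steps = [
    # (step, P(failure) score, impact score, P(not detected) score)
    ("transcribe order in central pharmacy", 4, 7, 5),
    ("select drug for dispensing machine",   2, 9, 6),
    ("fill automated dispensing machine",    3, 6, 3),
]

ranked = sorted(steps, key=lambda s: s[1] * s[2] * s[3], reverse=True)
for step, prob, impact, miss in ranked:
    print(f"criticality {prob * impact * miss:3d}: {step}")
# The top-ranked step would be the first target for error proofing.
```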
FMEA makes sense as a general approach and it (or similar prospective error-proofing techniques) has been used in other high-risk industries. However, the reliability of the technique is not clear. Different teams charged with analyzing the same process may identify different steps in the process, assign different risks to the steps, and consequently prioritize different targets for improvement.
See Primer. Failure to rescue is shorthand for failure to prevent a clinically important deterioration, such as death or permanent disability, arising from a complication of an underlying illness (e.g., cardiac arrest in a patient with acute myocardial infarction) or a complication of medical care (e.g., major hemorrhage after thrombolysis for acute myocardial infarction). Failure to rescue thus provides a measure of the degree to which providers responded to adverse occurrences (e.g., hospital-acquired infections, cardiac arrest or shock) that developed on their watch. It may reflect the quality of monitoring, the effectiveness of actions taken once early complications are recognized, or both.
The technical motivation for using failure to rescue to evaluate the quality of care stems from the concern that some institutions might document adverse occurrences more assiduously than others. Rewarding lower rates of in-hospital complications by themselves may therefore simply reward hospitals with poor documentation. However, if the medical record indicates that a complication has occurred, the response to that complication should provide an indicator of the quality of care that is less susceptible to charting bias.
See Primer. The process when one health care professional updates another on the status of one or more patients for the purpose of taking over their care. Typical examples involve a physician who has been on call overnight telling an incoming physician about patients she has admitted so he can continue with their ongoing management, know what immediate issues to watch out for, and so on. Nurses similarly conduct a handover at the end of their shift, updating their colleagues about the status of the patients under their care and tasks that need to be performed. When the outgoing nurses return for their next duty period, they will in turn receive new updates during the change of shift handover.
Handovers in care have always carried risks: a professional who spent hours assessing and managing a patient, upon completion of her work, provides a brief summary of the salient features of the case to an incoming professional who typically has other unfamiliar patients he must get to know. The summary may leave out key details due to oversight, exacerbated by an unstructured process and being rushed to finish work. Even structured, fairly thorough summaries during handovers may fail to capture nuances that could subsequently prove relevant.
In addition to handoffs between professionals working in the same clinical unit, shorter lengths of stay in hospitals and other occupancy issues have increased transitions between settings, with patients more often moving from one ward to another or from one institution to another (e.g., from an acute care hospital to a rehabilitation facility or skilled nursing facility). Due to the increasing recognition of hazards associated with these transitions in care, the term "handovers" is often used to refer to the information transfer that occurs from one clinical setting to another (e.g., from hospital to nursing home), not just from one professional to another.
See Primer. Broadly, harm refers to the impairment of the anatomy or physiology of the body and physical, social, or psychological issues arising from the impairment such as disease, disability, or death. In the context of patient safety, the term “adverse event” is used to describe harm to patients that is caused by medical care, as opposed to harm caused by underlying disease or disability. Adverse events can be preventable, ameliorable, or the result of negligence.
See Primer. Although long accepted by clinicians as an inevitable hazard of hospitalization, recent efforts demonstrate that relatively simple measures can prevent the majority of health care associated infections. As a result, hospitals are under intense pressure to reduce the burden of these infections.
See Primer. Individuals' ability to find, process, and comprehend the basic health information necessary to act on medical instructions and make decisions about their health. Numerous studies have documented the degree to which many patients do not understand basic information or instructions related to general aspects of their medical care, their medications, and procedures they will undergo. The limited ability to comprehend medical instructions or information in some cases reflects obvious language barriers (e.g., reviewing medication instructions in English with a patient who speaks very little English), but the scope of the problem reflects broader issues related to levels of education, cross-cultural issues, and overuse of technical terminology by clinicians.
Loosely defined or informal rules, often arrived at through experience or trial and error, used to make assessments and decisions (e.g., gastrointestinal complaints that wake patients up at night are unlikely to be benign in nature). Heuristics provide cognitive shortcuts in the face of complex situations and thus serve an important purpose. Unfortunately, they can also turn out to be wrong; frequently used heuristics often form the basis for the many cognitive biases, such as anchoring bias, availability bias, and confirmation bias, that have received attention in the literature on diagnostic errors and medical decision making.
See Primer. The term high reliability organization (HRO) refers to an organization or system that operates in hazardous conditions but has fewer than its fair share of adverse events. Commonly discussed examples include air traffic control systems, nuclear power plants, and naval aircraft carriers. It is worth noting that, in the patient safety literature, HROs are considered to operate with nearly failure-free performance records, not simply better-than-average ones. This shift in meaning is somewhat understandable given that failure rates in these other industries are so much lower than rates of errors and adverse events in health care, though the comparison glosses over the difference in significance of a "failure" in the nuclear power industry compared with one in health care. The point remains, however, that some organizations achieve consistently safe and effective performance records despite unpredictable operating environments or intrinsically hazardous endeavors. Detailed case studies of specific HROs have identified some common features, which have been offered as models for other organizations to achieve substantial improvements in their safety records. These features include:
- Preoccupation with failure: the acknowledgment of the high-risk, error-prone nature of an organization's activities and the determination to achieve consistently safe operations.
- Commitment to resilience: the development of capacities to detect unexpected threats and contain them before they cause harm, or to bounce back when they do.
- Sensitivity to operations: an attentiveness to the issues facing workers at the frontline. This feature comes into play when conducting analyses of specific events (e.g., frontline workers play a crucial role in root cause analyses by bringing up unrecognized latent threats in current operating procedures), but also in connection with organizational decision-making, which is somewhat decentralized. Management units at the frontline are given some autonomy in identifying and responding to threats, rather than adopting a rigid top-down approach.
- A culture of safety, in which individuals feel comfortable drawing attention to potential hazards or actual failures without fear of censure from management.
In the context of safety analysis, hindsight bias refers to the tendency to judge the events leading up to an accident as errors because the bad outcome is known. The more severe the outcome, the more likely that decisions leading up to this outcome will be judged as errors. Judging the antecedent decisions as errors implies that the outcome was preventable. In legal circles, one might use the phrase "but for," as in "but for these errors in judgment, this terrible outcome would not have occurred." Such judgments return us to the concept of "hindsight is 20/20." Those reviewing events after the fact see the outcome as more foreseeable and therefore more preventable than they would have appreciated in real time.
Human factors refers to the human strengths and constraints that must be accounted for in the design of interactive systems involving people, tools and technology, and work environments, in order to ensure their safety, reliability, and effectiveness. Ergonomics is a related term for the study of the interplay between human factors, technologies, and work environments.
Related term: human factors engineering
See Primer. Human factors engineering is the discipline that attempts to identify and address safety problems that arise due to the interaction between people, technology, and work environments.
Human-centered design is a problem-solving approach that focuses on developing and optimizing the efficiency, effectiveness, and usability of products and interactive systems, thereby increasing their safety. This approach prevents patient safety incidents by considering human capabilities, skills, limitations, and needs. Solutions are developed by involving end-user perspectives throughout the process.
An adverse effect of medical care, rather than of the underlying disease (literally "brought forth by healer," from Greek iatros, for healer, and gennan to bring forth); equivalent to adverse event.
Inattentional blindness is a concept from cognitive psychology describing why individuals in an intense or complex situation can miss an important event or data point when competing attentional tasks divide their focus. Individuals experiencing inattentional blindness unknowingly orient themselves toward, and process information from, only one part of their environment while excluding others, which can contribute to task omissions and missed signals, such as incorrect medication administration.
See Primer. Patient safety event reporting systems are ubiquitous in hospitals and are a mainstay of efforts to detect safety and quality problems. However, while event reports may highlight specific safety concerns, they do not provide insights into the epidemiology of safety problems.
Legislation governing the requirements of, and conditions under which, consent must be obtained varies by jurisdiction. Most general guidelines require patients to be informed of the nature of their condition, the proposed procedure, the purpose of the procedure, the risks and benefits of the proposed treatments, the probability of the anticipated risks and benefits, alternatives to the treatment and their associated risks and benefits, and the risks and benefits of not receiving the treatment or procedure.
Although the goals of informed consent are irrefutable, consent is often obtained in a haphazard, pro forma fashion, with patients having little true understanding of procedures to which they have consented. Evidence suggests that asking patients to restate the essence of the informed consent improves the quality of these discussions and makes it more likely that the consent is truly informed.
Patient safety innovations are defined as “implementation of new or altered products, tools, services, processes, systems, policies, organizational structures, or business models implemented to improve or enhance quality of care and reduce harm.” Patient safety innovations may be local, regional, national, or international in scope, and those included on the AHRQ PSNet Innovation Exchange have implementation data available demonstrating impact.
The phrase "just culture" was popularized in the patient safety lexicon by a report that outlined principles for achieving a culture in which frontline personnel feel comfortable disclosing errors including their own while maintaining professional accountability. The examples in the report relate to transfusion safety, but the principles clearly generalize across domains within health care organizations.
Traditionally, health care's culture has held individuals accountable for all errors or mishaps that befall patients under their care. By contrast, a just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control. A just culture also recognizes that many individual or "active" errors represent predictable interactions between human operators and the systems in which they work. However, in contrast to a culture that touts "no blame" as its governing principle, a just culture does not tolerate conscious disregard of clear risks to patients or gross misconduct (e.g., falsifying a record, performing professional duties while intoxicated).
In summary, a just culture recognizes that competent professionals make mistakes and acknowledges that even competent professionals will develop unhealthy norms (shortcuts, "routine rule violations"), but has zero tolerance for reckless behavior.
The terms active and latent as applied to errors were coined by Reason. Latent errors (or latent conditions) refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. For instance, whereas the active failure in a particular adverse event may have been a mistake in programming an intravenous pump, a latent error might be that the institution uses multiple different types of infusion pumps, making programming errors more likely. Thus, latent errors are quite literally "accidents waiting to happen." Latent errors are sometimes referred to as errors at the blunt end, referring to the many layers of the health care system that affect the person "holding" the scalpel. Active failures, in contrast, are sometimes referred to as errors at the sharp end, or the personnel and parts of the health care system in direct contact with patients.
Lean principles include standardized work, value stream, workflow, reducing waste, and efficiency with a focus on the customer experience. Application of Lean principles to healthcare settings increases patient safety and ensures that the patient’s healthcare experience is effective and of high quality. Researchers have used Lean methodology to improve processes related to chemotherapy preparation, surgical instrument sterilization, and medication administration.
Learning systems build functions, networks, and processes to use data, information, evidence, and knowledge to implement change and, ultimately, to sustain improvements. Learning systems focus both on internal improvement and information sharing, as well as external distribution of data and knowledge using technology to generate improvement in the larger environment in which the organization functions. Learning systems nurture a culture that enables information sharing and improved collective awareness across the spectrum of the healthcare system.
Without taking anything away from the particular hospitals that have achieved Magnet status, the program as a whole has its critics. In fact, at least one state nurses' association (Massachusetts) has taken an official position critiquing the program, charging that its perpetuation reflects the financial interests of its sponsoring organization and the participating hospitals more than the goals of improving health care quality or improving working conditions for nurses. Regardless of the particulars of the Magnet Recognition Program and the lack of persuasive evidence linking magnet status to quality, to many the term magnet hospital connotes a hospital that delivers superior patient care and, partly on this basis, attracts and retains high-quality nurses.
See Primer. The concept of medical emergency teams (also known as rapid response teams) is that of a cardiac arrest team with more liberal calling criteria. Instead of just frank respiratory or cardiac arrest, medical emergency teams respond to a wide range of worrisome, acute changes in patients' clinical status, such as low blood pressure, difficulty breathing, or altered mental status. In addition to less stringent calling criteria, the concept of medical emergency teams de-emphasizes the traditional hierarchy in patient care in that anyone can initiate the call. Nurses, junior medical staff, or others involved in the care of patients can call for the assistance of the medical emergency team whenever they are worried about a patient's condition, without having to wait for more senior personnel to assess the patient and approve the decision to call for help.
The Medication Administration Record (MAR) is a legal and permanent record of the medications administered to a patient, typically by a nurse in an acute or sub-acute setting. Technology (such as bar-coded medication administration) and standardized procedures (such as two-person verification or application of the “rights” of medication administration) are included in the medication administration process to improve patient safety.
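As an illustration of how a "rights" check might be encoded in a bar-coded administration workflow, here is a minimal sketch; the order fields, identifiers, and timing window are hypothetical:

```python
# A minimal sketch of a "rights" verification step, as a BCMA system might
# perform it. All field names and the 30-minute window are hypothetical.

from datetime import datetime, timedelta

def verify_rights(order, scanned_patient_id, scanned_drug, dose_mg, route, now):
    """Return a list of 'rights' violations; an empty list means proceed."""
    problems = []
    if scanned_patient_id != order["patient_id"]:
        problems.append("wrong patient")
    if scanned_drug != order["drug"]:
        problems.append("wrong drug")
    if dose_mg != order["dose_mg"]:
        problems.append("wrong dose")
    if route != order["route"]:
        problems.append("wrong route")
    if abs(now - order["due"]) > timedelta(minutes=30):  # hypothetical window
        problems.append("wrong time")
    return problems

order = {"patient_id": "MRN123", "drug": "heparin", "dose_mg": 5000,
         "route": "subcutaneous", "due": datetime(2024, 1, 1, 9, 0)}
print(verify_rights(order, "MRN123", "heparin", 5000, "subcutaneous",
                    datetime(2024, 1, 1, 9, 10)))  # -> [] (all rights satisfied)
```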
See Primer. Unintended inconsistencies in medication regimens occur with any transition in care. Medication reconciliation refers to the process of avoiding such inadvertent inconsistencies by reviewing the patient's current medication regimen and comparing it with the regimen being considered for the new setting of care.
A medication safety officer is a clinical practitioner in a leadership role who has expertise in safe medication management practices across all stages of medication delivery. His or her leadership and expertise optimize best practices and address medication adverse events through a systems-based approach.
Mindfulness reflects an organizational and/or team capacity to cultivate awareness of the myriad factors that affect detection of potential or emergent situations, so that threats can be recognized before they escalate into failure and a coordinated, well-understood response can be mounted during an incident. This can be accomplished through initiatives that involve multidisciplinary work and develop teams and relationships. The concept aligns with the core components of high reliability as defined by Weick and Sutcliffe.
Related terms: high reliability organizations; situational awareness
See Primer. Misdiagnosis in the context of patient safety is an erroneous or delayed diagnosis and has the potential to cause patient harm. The term is frequently used interchangeably with "diagnostic error". Misdiagnoses can potentially prevent or delay appropriate treatment or result in unnecessary or harmful treatment, which can lead to physical, psychological, or financial harm to patients. Misdiagnosis can be caused by cognitive biases in clinicians or underlying systems-level issues in health care.
See Primer. Missed care is a subset of the category known as “error of omission.” It refers to care that is delayed, partially completed, or not completed at all. Missed care can result in lower safety culture ratings, increases in adverse events such as pressure injuries, and higher rates of postoperative mortality.
In some contexts, errors are dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Mistakes reflect failures during attentional behaviors: behavior that requires conscious thought, analysis, and planning, as in active problem solving. Rather than lapses in concentration (as with slips), mistakes typically involve insufficient knowledge, failure to correctly interpret available information, or application of the wrong cognitive heuristic or rule. Thus, choosing the wrong diagnostic test or ordering a suboptimal medication for a given condition represents a mistake. Mistakes often reflect lack of experience or insufficient training. Reducing the likelihood of mistakes typically requires more training, supervision, or occasionally disciplinary action (in the case of negligence).
Unfortunately, health care has typically responded to all errors as if they were mistakes, with remedial education and/or added layers of supervision. In point of fact, most errors are actually slips, which are failures of schematic behavior that occur due to fatigue, stress, or emotional distractions, and are prevented through sharply different mechanisms.
In healthcare, moral distress or moral injury occurs when a person knows the ethically appropriate action to take but is constrained from taking that action. The constraints can come from multiple external factors, but they can also come from institutional or organizational regulations that do not align with the person’s moral principles, or when the person feels powerless to act on their moral beliefs.
See Primer. An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). This definition is identical to that for close call.
See Primer. The list of never events has expanded over time to include adverse events that are unambiguous, serious, and usually preventable. While most are rare, when never events occur, they are devastating to patients and indicate serious underlying organizational safety problems.
Though less often cited than high reliability theory in the health care literature, normal accident theory has played a prominent role in the study of complex organizations. In contrast to the optimism of high reliability theory, normal accident theory suggests that, at least in some settings, major accidents become inevitable and, thus, in a sense, "normal."
Perrow proposed two factors that create an environment in which a major accident becomes increasingly likely over time: complexity and tight coupling. The degree of complexity envisioned by Perrow occurs when no single operator can immediately foresee the consequences of a given action in the system. Tight coupling occurs when processes are intrinsically time-dependent: once a process has been set in motion, it must be completed within a certain period of time. Importantly, normal accident theory contends that accidents become inevitable in complex, tightly coupled systems regardless of the steps taken to increase safety. In fact, these steps sometimes increase the risk of future accidents through unintended collateral effects and general increases in system complexity.
Even if one does not accept the central contention of normal accident theory, that the potential for catastrophe emerges as an intrinsic property of certain complex systems, analyses informed by this theory's perspective have offered some fascinating insights into possible failure modes for high-risk organizations, including hospitals.
Normalization of deviance was coined by Diane Vaughan in her book The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, in which she analyzes the interactions between various cultural forces within NASA that contributed to the Challenger disaster. Vaughan used this expression to describe the gradual shift in what is regarded as normal after repeated exposures to "deviant behavior" (behavior straying from correct [or safe] operating procedure). Corners get cut, safety checks are bypassed, and alarms are ignored or turned off, and these behaviors become normal: not just common, but stripped of their significance as warnings of impending danger. In their discussion of a catastrophic error in health care, Chassin and Becher used the phrase "a culture of low expectations." When a system routinely produces errors (paperwork in the wrong chart, major miscommunications between different members of a given health care team, patients in the dark about important aspects of their care), providers in the system become inured to malfunction. In such a system, what should be regarded as a major warning of impending danger is instead treated as normal operating procedure.
The onion model illustrates the multiple levels or layers of protection (as in the layers of an onion) in a complex, high-risk system such as any health care setting. These layers include external regulations (e.g., related to staffing levels or required organizational practices, such as medication reconciliation), organizational features such as a just culture, equipment and technology (e.g., computerized order entry), and education and training of personnel.
Organizational learning is the process by which lessons from lived experience within a work environment are fed into, and embedded within, the organization's policies and culture to ensure continual improvement. Activities supporting organizational learning include detection, reporting, and discussion of safety issues by frontline staff, and promotion of experimentation and creative problem-solving in order to minimize the stigma of failures.
See Primer. Overdiagnosis involves identifying medical issues in people that were not going to be medically significant or cause harm. It may occur due to unnecessary screening of asymptomatic people, unneeded investigations in individuals with symptoms, or inappropriate reliance on laboratory or radiographic studies. Overdiagnosis can cause more harm than benefit. It can lead to unnecessary testing and treatment that ultimately adversely affects patient safety and well-being.
See Primer. The vast majority of health care takes place in the outpatient, or ambulatory, setting, and a growing body of research has identified and characterized factors that influence safety in office practice, the types of errors commonly encountered in ambulatory care, and potential strategies for improving ambulatory safety.
Originally created by the Agency for Healthcare Research and Quality (AHRQ), the Patient Safety Indicators (PSIs) reflect the quality of inpatient care as well as rates of preventable complications and iatrogenic events.
Patient Safety Officers are individuals assigned to lead patient safety efforts in health care organizations and are responsible for the management of the patient safety program. They are accountable for assessing the organization’s patient safety measures, ensuring staff are trained, promoting actions to identify and respond to patient safety events, and ensuring that senior leadership is knowledgeable about patient safety events and the overall status of the program.
Patient Safety Organizations (PSOs) were established through the Patient Safety and Quality Improvement Act that authorized the Department of Health and Human Services (HHS) to establish a voluntary system of reporting and analyzing data to evaluate and improve patient safety. PSOs work with healthcare providers (e.g., hospitals, nursing homes, dialysis centers) to assist them with their patient safety programs by analyzing the data submitted and providing feedback on ways to improve patient safety. AHRQ is the agency responsible for the oversight of the PSO program.
Performance can be defined in terms of patient outcomes but is more commonly defined in terms of processes of care (e.g., the percentage of eligible diabetics who have been referred for annual retinal examinations, the percentage of children who have received immunizations appropriate for their age, or the percentage of patients admitted to the hospital with pneumonia who receive antibiotics within 6 hours). Pay-for-performance initiatives reflect the efforts of purchasers of health care—from the federal government to private insurers—to use their purchasing power to encourage providers to develop whatever specific quality improvement initiatives are required to achieve the specified targets. Thus, rather than committing to a specific quality improvement strategy, such as a new information system or a disease management program, which may have variable success in different institutions, pay for performance creates a climate in which provider groups will be strongly incentivized to find whatever solutions will work for them.
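As a sketch, computing a process measure of this kind amounts to counting how many eligible patients received the recommended care; the records and field names below are hypothetical:

```python
# A minimal sketch of a process-of-care performance measure: the share of
# eligible patients who received a recommended intervention. The records
# and field names are hypothetical.

patients = [
    {"diabetic": True,  "retinal_exam_this_year": True},
    {"diabetic": True,  "retinal_exam_this_year": False},
    {"diabetic": False, "retinal_exam_this_year": False},
    {"diabetic": True,  "retinal_exam_this_year": True},
]

eligible = [p for p in patients if p["diabetic"]]       # denominator
met = sum(p["retinal_exam_this_year"] for p in eligible)  # numerator
print(f"annual retinal exam rate: {met}/{len(eligible)} = {met/len(eligible):.0%}")
```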
See Primer. Long and unpredictable work hours have been a staple of medical training for centuries. However, little attention was paid to the patient safety effects of fatigue among residents until March 1984, when Libby Zion died due to a medication-prescribing error while under the care of residents in the midst of a 36-hour shift. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting work hours for all residents, with the key components being that residents should work no more than 80 hours per week or 24 consecutive hours on duty, should not be "on-call" more than every third night, and should have 1 day off per week.
Commonly referred to as PDSA, the plan-do-study-act cycle refers to the recurring sequence of activities advocated for achieving process or system improvement. The cycle was first proposed by Walter Shewhart, one of the pioneers of statistical process control (see run charts), and popularized by his student, quality expert W. Edwards Deming. The PDSA cycle represents one of the cornerstones of continuous quality improvement (CQI). The components of the cycle are briefly described below, followed by a schematic sketch of repeated cycles:
- Plan: Analyze the problem you intend to improve and devise a plan to correct the problem.
- Do: Carry out the plan (preferably as a pilot project to avoid major investments of time or money in unsuccessful efforts).
- Study: Did the planned action succeed in solving the problem? If not, what went wrong? If partial success was achieved, how could the plan be refined?
- Act: Adopt the change piloted above as is, abandon it as a complete failure, or modify it and run through the cycle again. Regardless of which action is taken, the PDSA cycle continues, either with the same problem or a new one.
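The schematic sketch below renders the cycle as a loop driven by measurement; the measurement and intervention functions are hypothetical placeholders for real improvement work:

```python
# A schematic sketch of repeated PDSA cycles driving a measured rate toward
# a target. The functions below are hypothetical stand-ins.

import random

def measure_defect_rate():           # Study: hypothetical audit of recent cases
    return random.uniform(0.02, 0.08)

def plan_and_do_small_test(cycle):   # Plan + Do: pilot one small change
    print(f"cycle {cycle}: piloting change on one ward")

TARGET = 0.03
for cycle in range(1, 6):            # several rapid cycles, not one big project
    plan_and_do_small_test(cycle)    # Plan, Do
    rate = measure_defect_rate()     # Study
    if rate <= TARGET:               # Act: adopt, abandon, or modify
        print(f"cycle {cycle}: target met ({rate:.1%}); adopt change")
        break
    print(f"cycle {cycle}: rate {rate:.1%} above target; modify and re-run")
```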
PDSA can seem like a simple way to tackle quality problems. In practice, though, many teams omit key steps or do not perform sufficient cycles. PDSA aims to foster rapid change, with frequent tests of improvement, so relying on, for example, quarterly data to assess the effects of the efforts to date is usually not adequate. Another way in which practice deviates from theory for PDSA is the way in which the cycles play out. PDSA cycles are typically depicted as a smooth progression, with each cycle seamlessly and iteratively building on the previous one: as the number of cycles increases, their effectiveness and overall cumulative effect strengthen. In practice, this type of work involves frequent false starts, backtracking, regroupings, backsliding, and overlapping scenarios. Well-executed PDSA cycles in practice involve a more complex tangle of related improvement efforts tackling different aspects of the target problem.
Preventability in the context of patient safety is the extent to which a patient safety adverse event or harm is preventable. Preventable adverse events occur because of an error or failure to apply strategies for error prevention. One in 10 patients are harmed while receiving inpatient care in hospitals and four in 10 patients are harmed in primary and outpatient care. This harm is caused by a range of adverse events, and 50%-80% of these events are preventable. In terms of prevalence, preventable patient safety events are most frequently related to diagnosis, prescription, or medication delivery processes.
In health care, production pressure refers to delivery of services—the pressure to run hospitals at 100% capacity, with each bed filled with the sickest possible patients who are discharged at the first sign that they are stable, or the pressure to leave no operating room unused and to keep moving through the schedule for each room as fast as possible. In a survey of anesthesiologists, half of respondents stated that they had witnessed at least one case in which production pressure resulted in what they regarded as unsafe care. Examples included elective surgery in patients without adequate preoperative evaluation and proceeding with surgery despite significant contraindications.
Production pressure produces an organizational culture in which frontline personnel (and often managers) are reluctant to suggest any course of action that compromises productivity, even temporarily. For instance, in the survey of anesthesiologists, respondents reported pressure by surgeons to avoid delaying cases through additional patient evaluation or canceling cases, even when patients had clear contraindications to surgery.
Psychological safety is the belief that speaking up will not result in negative consequences for oneself, such as punishment or humiliation. Psychological safety within health care teams fosters patient safety by allowing team members to feel accepted, respected, and able to share their ideas, questions, concerns and mistakes.
See Primer. Rapid response teams represent an intuitively simple concept: when a patient demonstrates signs of imminent clinical deterioration, a team of providers is summoned to the bedside to immediately assess and treat the patient with the goal of preventing adverse clinical outcomes.
Because mistaken substitution or reversal of alphanumeric information is such a potential hazard, read-back protocols typically include the use of phonetic alphabets, such as the NATO system ("Alpha-Bravo-Charlie-Delta-Echo...X-ray-Yankee-Zulu") now familiar to many. In health care, traditionally, read-back has been mandatory only in the context of checking to ensure accurate identification of recipients of blood transfusions. However, there are many other circumstances in which health care teams could benefit from following such protocols, for example, when communicating key lab results or patient orders over the phone, and even when exchanging information in person (e.g., handoffs).
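As an illustration, a minimal sketch of spelling out alphanumeric information with the NATO phonetic alphabet, as a read-back protocol might require when relaying a patient identifier or lab value by phone:

```python
# A minimal sketch of phonetic spelling for read-back. Digits are read
# as-is; letters map to their NATO phonetic words.

NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}

def spell_out(text):
    """Render each character as its phonetic word (digits read as-is)."""
    return " ".join(NATO.get(ch.upper(), ch) for ch in text if not ch.isspace())

print(spell_out("4B7"))  # -> "4 Bravo 7"
```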
An example of a red rule in health care might be the following: "No hospitalized patient can undergo a test of any kind, receive a medication or blood product, or undergo a procedure if they are not wearing an identification bracelet." The implication of designating this a red rule is that the moment a patient is identified as not meeting this condition, all activity must cease in order to verify the patient's identity and supply an identification band.
Health care organizations already have numerous rules and policies that call for strict adherence. The reason that some organizations are using red rules is that, unlike many standard rules, red rules will always be supported by the entire organization. In other words, when someone at the frontline calls for work to cease on the basis of a red rule, top management must always support this decision. Thus, when properly implemented, red rules should foster a culture of safety, as frontline workers will know that they can stop the line when they notice potential hazards, even when doing so may result in considerable inconvenience, lost time, or cost for their immediate supervisors or the organization as a whole.
Resilience is a characteristic that enables organizations to adapt to uncertain conditions in their work environment. Resilient organizations are able to anticipate risk and continuously adapt to the complexity of their work environments to prevent failure. Personal resilience, while important, is not the focus of this definition; resilience as an organizational trait reduces overreliance on individual resilience by strengthening the organization's capacity to minimize disruption.
Related term: Resilience Engineering
Resilience engineering is the organizational capability to design processes and actions that systematically track data, information, evidence, and knowledge in order to anticipate and respond to challenges, and to restore disrupted processes to standardized, improved states based on lessons learned during the disruption. Those lessons are then hardwired into the organization's processes to support continuous adjustment, in essence learning from disruptions in order to prevent future problems and failure and to become resilient.
Related term: Resilience
Risk management in healthcare is a complex set of clinical and administrative systems, processes, procedures, and reporting structures designed to detect, monitor, assess, mitigate, and prevent risks to patients.
See Primer. Efforts to engage patients in safety efforts have focused on three areas: enlisting patients in detecting adverse events, empowering patients to ensure safe care, and emphasizing patient involvement as a means of improving the culture of safety.
See Primer. Initially developed to analyze industrial accidents, root cause analysis is now widely deployed as an error analysis tool in health care. A central tenet of RCA is to identify underlying problems that increase the likelihood of errors while avoiding the trap of focusing on mistakes by individuals.
The phrase "rule of thumb" probably has it origin with trades such as carpentry in which skilled workers could use the length of their thumb (roughly one inch from knuckle to tip) rather than more precise measuring instruments and still produce excellent results. In other words, they measured not using a "rule of wood" (old-fashioned way of saying ruler), but by a "rule of thumb."
See Primer. High-reliability organizations consistently minimize adverse events despite carrying out intrinsically hazardous work. Such organizations establish a culture of safety by maintaining a commitment to safety at all levels, from frontline providers to managers and executives.
Safety-I and Safety-II reflect two perspectives on understanding safety improvements. The Safety-I approach focuses on identifying causes of and contributing factors to adverse events, without considering variability in human performance. The Safety-II approach considers variations in everyday performance to understand how things usually go right. Under the Safety-I framework, procedural violations in the health care setting might be viewed unfavorably. In the Safety-II framework, procedural violations may be seen as necessary modifications within a complex work environment. The application of both frameworks provides a deeper understanding of procedural violations and facilitates the development of targeted interventions for improving safety.
SBAR (Situation, Background, Assessment, Recommendation) is a concise, standardized process to clearly communicate information between individuals or groups. The Situation names the safety issue, Background provides known evidence and context, Assessment states the impression for next steps, and the Recommendation includes the plan to improve or remedy the patient safety issue. SBARs have commonly been used to support situational awareness and improve handoff communications. SBARs are also used to analyze patient safety events and develop potential solutions to communicate with other stakeholders, such as hospital leadership.
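A minimal sketch of SBAR as a structured data type shows how the four fields standardize an otherwise free-form message; the clinical content below is hypothetical:

```python
# A minimal sketch of SBAR as a structured message. All clinical content
# is hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class SBAR:
    situation: str       # name the safety issue
    background: str      # known evidence and context
    assessment: str      # impression of what is going on
    recommendation: str  # proposed plan or request

    def render(self):
        return (f"S: {self.situation}\nB: {self.background}\n"
                f"A: {self.assessment}\nR: {self.recommendation}")

msg = SBAR(
    situation="Mr. Smith in bed 12 is increasingly short of breath.",
    background="Admitted yesterday with pneumonia; on 2 L oxygen.",
    assessment="Oxygen saturation has fallen to 88%; he may be deteriorating.",
    recommendation="Please assess at the bedside within 15 minutes.",
)
print(msg.render())
```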
See Primer. The term “second victim” refers to health care workers who are involved in medical errors and adverse events and experience emotional distress. Some patient safety researchers and advocates have raised concerns regarding the use of the term, and others suggest that its appropriateness depends on hospital culture and context.
See Primer. An adverse event in which death or serious harm to a patient has occurred; usually used to refer to events that are not at all expected or acceptable (e.g., an operation on the wrong patient or body part). The choice of the word sentinel reflects the egregiousness of the injury (e.g., amputation of the wrong leg) and the likelihood that investigation of such events will reveal serious problems in current policies or procedures.
The sharp end refers to the personnel or parts of the health care system in direct contact with patients. Personnel operating at the sharp end may literally be holding a scalpel (e.g., an orthopedist who operates on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. To complete the metaphor, the blunt end refers to the many layers of the health care system that affect the scalpels, pills, and medical devices, or the personnel wielding, administering, and operating them. Thus, an error in programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple types of infusion pumps (making programming errors more likely) would represent a problem at the blunt end. The terminology of "sharp" and "blunt" ends corresponds roughly to active failures and latent conditions.
See Primer. The term "signout" is used to refer to the act of transmitting information about the patient. Handoffs and signouts have been linked to adverse clinical events in settings ranging from the emergency department to the intensive care unit.
Six Sigma refers loosely to striving for near perfection in the performance of a process or production of a product. The name derives from the Greek letter sigma, often used to refer to the standard deviation of a normal distribution. Roughly 95% of a normally distributed population falls within 2 standard deviations of the average (or "2 sigma"), leaving about 5% of observations as "abnormal" or "unacceptable." Six Sigma targets a defect rate of 3.4 per million opportunities, corresponding to 6 standard deviations from the population average.
When it comes to industrial performance, having 5% of a product fall outside the desired specifications would represent an unacceptably high defect rate. What company could stay in business if 5% of its product did not perform well? For example, would we tolerate a pharmaceutical company that produced pills containing incorrect dosages 5% of the time? Certainly not. But when it comes to clinical performance (the number of patients who receive a proven medication, the number of patients who develop complications from a procedure), we routinely accept failure or defect rates in the 2% to 5% range, orders of magnitude worse than Six Sigma performance.
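The arithmetic behind these figures can be checked directly against the normal distribution. One caveat worth noting: the conventional 3.4-per-million figure builds in an assumed 1.5-sigma long-term drift of the process mean, so it corresponds to a 4.5-sigma one-sided tail:

```python
# A minimal sketch of the defect rates implied by sigma levels, using the
# standard normal distribution.

from scipy.stats import norm

outside_2_sigma = 2 * norm.sf(2)  # both tails beyond +/- 2 sigma
print(f"2-sigma process: {outside_2_sigma:.1%} defective")          # ~4.6%

# The conventional Six Sigma figure assumes a 1.5-sigma long-term shift,
# so the defect tail is evaluated at 6 - 1.5 = 4.5 sigma (one-sided).
dpmo_six_sigma = norm.sf(6 - 1.5) * 1_000_000
print(f"Six Sigma (with shift): {dpmo_six_sigma:.1f} per million")  # ~3.4
```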
Not every process in health care requires such near-perfect performance. In fact, one of the lessons of Reason's Swiss cheese model is the extent to which low overall error rates are possible even when individual components have many "holes." However, many high-stakes processes are far less forgiving, since a single "defect" can lead to catastrophe (e.g., wrong-site surgery, accidental administration of concentrated potassium).
Errors can be dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Slips refer to failures of schematic behaviors, or lapses in concentration (e.g., overlooking a step in a routine task due to a lapse in memory, an experienced surgeon nicking an adjacent organ during an operation due to a momentary lapse in concentration). Slips occur in the face of competing sensory or emotional distractions, fatigue, and stress. Reducing the risk of slips requires attention to the design of protocols, devices, and work environments: using checklists so key steps will not be omitted, reducing fatigue among personnel (or shifting high-risk work away from personnel who have been working extended hours), removing unnecessary variation in the design of key devices, eliminating distractions (e.g., phones) from areas where work requires intense concentration, and other redesign strategies. Slips can be contrasted with mistakes, which are failures that occur during attentional behavior, such as active problem solving.
Stewardship refers to efforts by healthcare providers (e.g., clinicians, hospitals, doctor’s offices, pharmacies, etc.) to promote the safe and appropriate use of healthcare resources. Recent stewardship priorities have focused on appropriate use of opioids and antimicrobials. The concept of “stewardship” was first introduced by the World Health Organization (WHO) to clarify the practical components of governance in the health sector; their focus was on how governments take responsibility for the health system and the wellbeing of the population, fulfill health system functions, assure equity, and coordinate interaction with government and society.
Most definitions of quality emphasize favorable patient outcomes as the gold standard for assessing quality. In practice, however, one would like to detect quality problems without waiting for poor outcomes to accumulate in numbers sufficient for deviations from expected rates of morbidity and mortality to be detected. Donabedian first proposed that quality could be measured using aspects of care with proven relationships to desirable patient outcomes. For instance, if proven diagnostic and therapeutic strategies are monitored, quality problems can be detected long before demonstrable poor outcomes occur.
Aspects of care with proven connections to patient outcomes fall into two general categories: process and structure. Processes encompass all that is done to patients in terms of diagnosis, treatment, monitoring, and counseling. Cardiovascular care provides classic examples of the use of process measures to assess quality. Given the known benefits of aspirin and beta-blockers for patients with myocardial infarction, the quality of care for these patients can be measured in terms of the rates at which eligible patients receive these proven therapies. Similarly, the percentage of eligible women who undergo mammography at appropriate intervals provides a process-based measure of the quality of preventive care for women.
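Operationally, a process measure is just the proportion of eligible patients who received the proven therapy. A minimal sketch follows; the patient records, field names, and eligibility rule are entirely hypothetical and only illustrate the numerator/denominator logic.

```python
# Hypothetical records: did eligible myocardial infarction (MI) patients
# receive aspirin? Contraindicated patients are excluded from eligibility.
patients = [
    {"id": "P1", "mi": True,  "aspirin": True,  "contraindicated": False},
    {"id": "P2", "mi": True,  "aspirin": False, "contraindicated": False},
    {"id": "P3", "mi": True,  "aspirin": False, "contraindicated": True},  # excluded
    {"id": "P4", "mi": False, "aspirin": False, "contraindicated": False}, # not MI
]

eligible = [p for p in patients if p["mi"] and not p["contraindicated"]]
treated = [p for p in eligible if p["aspirin"]]

rate = len(treated) / len(eligible)
print(f"Aspirin process measure: {len(treated)}/{len(eligible)} = {rate:.0%}")
```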
Structure refers to the setting in which care occurs and the capacity of that setting to produce quality. Traditional examples of structural measures related to quality include credentials, patient volume, and academic affiliation. More recent structural measures include the adoption of organizational models for inpatient care (e.g., closed intensive care units and dedicated stroke units) and possibly the presence of sophisticated clinical information systems. Cardiovascular care provides another classic example of structural measures of quality. Numerous studies have shown that institutions that perform more cardiac surgeries and invasive cardiology procedures achieve better outcomes than institutions that see fewer patients. Given these data, patient volume represents a structural measure of quality of care for patients undergoing cardiac procedures.
In the model, each slice of cheese represents a safety barrier or precaution relevant to a particular hazard. For example, if the hazard were wrong-site surgery, slices of the cheese might include conventions for identifying sidedness on radiology tests, a protocol for signing the correct site when the surgeon and patient first meet, and a second protocol for reviewing the medical record and checking the previously marked site in the operating room. Many more layers exist. The point is that no single barrier is foolproof: each has "holes"; hence, the Swiss cheese. For some serious events (e.g., operating on the wrong site or wrong person), the holes will align only infrequently, but even rare cases of harm (errors making it "through the cheese") are unacceptable.
While the model may convey the impression that the slices of cheese and the locations of their respective holes are independent, this may not be the case. In an emergency, for instance, all three of the surgical identification safety checks mentioned above may fail or be bypassed: the surgeon may meet the patient for the first time in the operating room; a hurried x-ray technologist might mislabel a film (or simply hang it backwards without a hurried surgeon noticing); and "signing the site" may not take place at all (e.g., if the patient is unconscious) or, if it does take place, may be rushed and offer no real protection. In the technical parlance of accident analysis, the different barriers may have a common failure mode, in which several protections are lost at once (i.e., the holes in several layers of the cheese line up).
In health care, such failure modes, in which slices of the cheese line up more often than one would expect if the locations of their holes were independent of each other (and certainly more often than wings fly off airplanes), occur distressingly often. In fact, many of the systems problems discussed by Reason and others (poorly designed work schedules, lack of teamwork, variations in the design of important equipment between and even within institutions) are sufficiently common that many of the slices of cheese already have their holes aligned. In such cases, one slice of cheese may be all that is left between the patient and significant hazard.
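The value of layered, independent barriers, and the danger of common failure modes, can be made concrete with a little arithmetic. The sketch below is purely illustrative; the per-barrier failure rates are assumed, not drawn from the text.

```python
# Illustrative only: how layered barriers reduce risk when independent,
# and why a common failure mode defeats the layering.
p = 0.05  # assumed chance that any single barrier misses the hazard

# Independent barriers: harm reaches the patient only if every hole aligns,
# so the overall failure probability shrinks multiplicatively.
for k in (1, 2, 3):
    print(f"{k} independent barrier(s): failure probability {p ** k:.6f}")
# 1 -> 0.050000, 2 -> 0.002500, 3 -> 0.000125

# Common failure mode: one event (e.g., an emergency that bypasses every
# check) disables all barriers together with probability q, so overall
# risk is at least q no matter how many layers are stacked.
q = 0.01
print(f"With a common failure mode: risk is at least {q:.2f}")
```

The multiplicative shrinkage on the first lines is why low overall error rates are possible despite imperfect components; the common-mode term is why aligned holes are so dangerous.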
See Primer. Medicine has traditionally treated quality problems and errors as failings on the part of individual providers, perhaps reflecting inadequate knowledge or skill levels. The systems approach, by contrast, takes the view that most errors reflect predictable human failings in the context of poorly designed systems (e.g., expected lapses in human vigilance in the face of long work hours or predictable mistakes on the part of relatively inexperienced personnel faced with cognitively complex situations). Rather than focusing corrective efforts on reprimanding individuals or pursuing remedial education, the systems approach seeks to identify situations or factors likely to give rise to human error and implement systems changes that will reduce their occurrence or minimize their impact on patients. This view holds that efforts to catch human errors before they occur or block them from causing harm will ultimately be more fruitful than ones that seek to somehow create flawless providers.
This systems focus includes paying attention to human factors engineering (or ergonomics), including the design of protocols, schedules, and other factors that are routinely addressed in other high-risk industries but have traditionally been ignored in medicine.
Teams are groups of individuals who work dynamically, interdependently, and collaboratively towards a common goal, while retaining specific individual roles or functions. Team members should (1) include anyone involved in the patient care process (including leaders), (2) have clearly defined roles and responsibilities, (3) be accountable to the team for their actions, and (4) stay continually informed for effective team functioning. It is important that teams not only span different professions but also reflect diversity among their members (sex, age, race, ethnicity, culture, etc.) so that the team is representative of the population it serves. AHRQ has developed an evidence-based program called TeamSTEPPS® that provides tools for healthcare teams in different types of organizations, with a particular focus on improved communication.
See Primer. Providing safe health care depends on highly trained individuals with disparate roles and responsibilities acting together in the best interests of the patient. The need for improved teamwork has led to the application of teamwork training principles, originally developed in aviation, to a variety of health care settings.
See Primer. The "Five Rights"—administering the Right Medication, in the Right Dose, at the Right Time, by the Right Route, to the Right Patient—are the cornerstone of traditional nursing teaching about safe medication practice.
While the Five Rights represent goals of safe medication administration, they contain no procedural detail, and thus may inadvertently perpetuate the traditional focus on individual performance rather than system improvement. Procedures for ensuring each of the Five Rights must take into account human factors and systems design issues (such as workload, ambient distractions, poor lighting, problems with wristbands, and ineffective double-check protocols) that can threaten or undermine even the most conscientious efforts to comply with the Five Rights. In the end, the Five Rights remain an important goal for safe medication practice, but one that may give the illusion of safety if not supported by strong policies and procedures, a system organized around modern principles of patient safety, and a robust safety culture.
"Protected health information" (PHI) includes all medical records and other individually identifiable health information. "Individually identifiable information" includes data that explicitly linked to a patient as well as health information with data items with a reasonable potential for allowing individual identification.
HIPAA also requires providers to offer patients certain rights with respect to their information, including the right to access and copy their records and the right to request amendments to the information contained in their records.
Administrative protections specified by HIPAA to promote the above regulations and rights include requirements for a Privacy Officer and staff training regarding the protection of patients’ information.
Transitions of care refer to the periods when patients move from one health care setting to another, where the settings are in different locations or offer different levels of care.
Transparency in healthcare emphasizes providing information on healthcare quality, safety, and consumer experience with care in a reliable and understandable manner. Transparency is aimed at promoting patient safety by building trust between patients, providers, the organization, and society at large, with the goal of improved safety, informed communication, and increased knowledge. Transparency can occur at the individual level (e.g., disclosure of medical errors by clinicians to patients and families) as well as at the organizational level (e.g., public reporting activities from CMS, AHRQ, and Leapfrog).
See Primer. Triggers are clues that can be used to identify an adverse event (AE) or error. A simple example is flagging any administration of the drug naloxone and then reviewing the case to determine whether an opioid overdose occurred; an overdose occurring in a clinical setting is an adverse event. Trigger tools are instruments designed to identify adverse events so that organizations can measure and track them. Trigger tools allow healthcare organizations to identify greater numbers of AEs than voluntary reporting does. IHI's Global Trigger Tool includes many different types of triggers for adverse events.
See Primer. Signals for detecting likely adverse events. Triggers alert providers involved in patient safety activities to probable adverse events so they can review the medical record to determine whether an actual or potential adverse event has occurred. For instance, if a hospitalized patient received naloxone (a drug used to reverse the effects of narcotics), the patient probably received an excessive dose of morphine or some other opiate. In the emergency department, the use of naloxone would more likely represent treatment of a self-inflicted opiate overdose, so the trigger would have little value in that setting. But among patients already admitted to the hospital, a pharmacy could use the administration of naloxone as a "trigger" to investigate possible adverse drug events.
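As a minimal sketch of this logic (the record format, field names, and data are all hypothetical), a naloxone trigger amounts to a filter over medication administration events that takes the care setting into account:

```python
from dataclasses import dataclass

@dataclass
class MedAdministration:
    patient_id: str
    drug: str
    setting: str  # e.g., "inpatient" or "emergency"

def naloxone_trigger(events):
    """Flag inpatient naloxone administrations for chart review.

    Naloxone on a ward suggests a probable adverse drug event; in the
    emergency department it usually reflects a self-inflicted overdose,
    so that setting is excluded.
    """
    return [
        e for e in events
        if e.drug == "naloxone" and e.setting == "inpatient"
    ]

# Hypothetical administration records, illustrative only.
records = [
    MedAdministration("A123", "naloxone", "inpatient"),    # flagged
    MedAdministration("B456", "naloxone", "emergency"),    # ignored: ED setting
    MedAdministration("C789", "metoprolol", "inpatient"),  # ignored: not a trigger drug
]

for event in naloxone_trigger(records):
    print(f"Review chart for patient {event.patient_id}")
```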
In cases in which the trigger correctly identified an adverse event, causative factors can be identified and, over time, interventions developed to reduce the frequency of particularly common causes of adverse events. The traditional use of triggers has been to efficiently identify adverse events after the fact. However, using triggers in real time has tremendous potential as a patient safety tool. In a study of real-time triggers in a single community hospital, for example, more than 1000 triggers were generated in 6 months, and approximately 25% led to physician action and would not have been recognized without the trigger.
As with any alert or alarm system, the threshold for generating triggers has to balance true and false positives. The system will lose its value if too many triggers prove to be false alarms. This concern is less relevant when triggers are used as chart review tools. In such cases, the tolerance of false alarms depends only on the availability of sufficient resources for medical record review. Reviewing four false alarms for every true adverse event might be quite reasonable in the context of an institutional safety program, but frontline providers would balk at (and eventually ignore) a trigger system that generated four false alarms for every true one.
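The false alarm burden described here follows directly from a trigger's sensitivity, its specificity, and the prevalence of true adverse events. The numbers below are purely illustrative, chosen only to show how a roughly four-to-one false alarm ratio can arise:

```python
# Illustrative only: how false-to-true alarm ratios arise from trigger
# accuracy and adverse event (AE) prevalence. All values are assumed.
sensitivity = 0.90   # P(trigger fires | true AE)
specificity = 0.96   # P(trigger silent | no AE)
prevalence = 0.01    # fraction of charts with a true AE

true_positives = sensitivity * prevalence             # 0.009
false_positives = (1 - specificity) * (1 - prevalence)  # 0.0396
ppv = true_positives / (true_positives + false_positives)

print(f"Positive predictive value: {ppv:.1%}")                       # ~18.5%
print(f"False alarms per true event: {false_positives / true_positives:.1f}")  # ~4.4
```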
See Primer. Underdiagnosis involves delayed or missed diagnosis of a medical condition. It may occur through acts of omission (e.g., underuse of appropriate tests) or commission (e.g., overuse of inappropriate tests). There may be healthcare disparities associated with underdiagnosis, such as missed diagnosis of acute myocardial infarction in women or missed diagnosis of depression in African American patients.
Underuse refers to the failure to provide a health care service when it would have produced a favorable outcome for a patient. Standard examples include failures to provide appropriate preventive services to eligible patients (e.g., Pap smears, flu shots for elderly patients, screening for hypertension) and proven medications for chronic illnesses (steroid inhalers for asthmatics; aspirin, beta-blockers, and lipid-lowering agents for patients who have suffered a recent myocardial infarction).
Overuse refers to providing a process of care in circumstances where the potential for harm exceeds the potential for benefit. Prescribing an antibiotic for a viral infection like a cold, for which antibiotics are ineffective, constitutes overuse. The potential for harm includes adverse reactions to the antibiotics and increases in antibiotic resistance among bacteria in the community. Overuse can also apply to diagnostic tests and surgical procedures.
Misuse occurs when an appropriate process of care has been selected but a preventable complication occurs and the patient does not receive the full potential benefit of the service. Avoidable complications of surgery or medication use are misuse problems. A patient who suffers a rash after receiving penicillin for strep throat, despite having a known allergy to that antibiotic, is an example of misuse. A patient who develops a pneumothorax after an inexperienced operator attempted to insert a subclavian line would represent another example of misuse.
See Primer. Patient safety event reporting systems are ubiquitous in hospitals and are a mainstay of efforts to detect safety and quality problems. However, while event reports may highlight specific safety concerns, they do not provide insights into the epidemiology of safety problems.
From a definitional point of view, it does not matter if frontline users are justified in working around a given policy or equipment design feature. What does matter is that the motivation for a workaround lies in getting work done, not laziness or whim. Thus, the appropriate response by managers to the existence of a workaround should not consist of reflexively reminding staff about the policy and restating the importance of following it. Rather, workarounds should trigger assessment of workflow and the various competing demands for the time of frontline personnel. In busy clinical areas where efficiency is paramount, managers can expect workarounds to arise whenever policies create added tasks for frontline personnel, especially when the extra work is out of proportion to the perceived importance of the safety goal.
See Primer. Few medical errors are as terrifying as those that involve patients who have undergone surgery on the wrong body part, undergone the incorrect procedure, or had a procedure intended for another patient. These "wrong-site, wrong-procedure, wrong-patient errors" (WSPEs) are rightly termed never events.