Error analysis may involve retrospective investigations (as in root cause analysis) or prospective attempts to predict error modes. Different frameworks exist for predicting possible errors. One commonly used approach is failure mode and effect analysis (FMEA), in which the likelihood of a particular process failure is combined with an estimate of the relative impact of that error to produce a "criticality index." By combining the probability of failure with the consequences of failure, this index allows specific processes to be prioritized as quality improvement targets.
FMEA is a common process used to prospectively identify error risk within a particular process. It begins with complete process mapping, identifying all the steps that must occur for a given process to be carried out (e.g., programming an infusion pump or preparing an intravenous medication in the pharmacy). With the process mapped out, the FMEA continues by identifying the ways in which each step can go wrong (i.e., the failure modes for each step), the probability that each error will be detected (i.e., so that it can be corrected before causing harm), and the consequences or impact of the error going undetected. The estimates of the likelihood of a particular process failure, the chance of detecting that failure, and its impact are combined numerically to produce a criticality index.
This criticality index provides a rough quantitative estimate of the magnitude of hazard posed by each step in a high-risk process. Assigning a criticality index to each step allows prioritization of targets for improvement. For instance, an FMEA analysis of the medication-dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest criticality indices) would be prioritized for error proofing.
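The ranking described above can be sketched in code. This is a minimal illustration, not a standardized FMEA tool: the step names and the 1–10 scores are invented assumptions, and real analyses use team-assigned scales.

```python
# Hypothetical sketch of a criticality-index ranking for steps in a
# medication-dispensing process. Step names and scores are illustrative
# assumptions, not data from an actual FMEA.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    p_failure: int      # likelihood of failure (1 = rare, 10 = frequent)
    p_undetected: int   # chance the failure escapes detection (1-10)
    impact: int         # severity of consequences if undetected (1-10)

    @property
    def criticality(self) -> int:
        # Criticality index: product of likelihood, detectability, and impact
        return self.p_failure * self.p_undetected * self.impact

steps = [
    Step("Transcribe order in central pharmacy", 4, 3, 7),
    Step("Select drug from shelf", 2, 5, 9),
    Step("Fill automated dispensing machine", 3, 6, 8),
]

# Highest criticality indices first: these become error-proofing priorities
for step in sorted(steps, key=lambda s: s.criticality, reverse=True):
    print(f"{step.criticality:4d}  {step.name}")
```

Multiplying the three scores, rather than averaging them, means a step must be simultaneously likely, hard to detect, and harmful to rise to the top of the list.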
FMEA makes sense as a general approach and it (or similar prospective error-proofing techniques) has been used in other high-risk industries. However, the reliability of the technique is not clear. Different teams charged with analyzing the same process may identify different steps in the process, assign different risks to the steps, and consequently prioritize different targets for improvement.
Failure to rescue is shorthand for the failure to prevent a clinically important deterioration, such as death or permanent disability, arising from a complication of an underlying illness (e.g., cardiac arrest in a patient with acute myocardial infarction) or a complication of medical care (e.g., major hemorrhage after thrombolysis for acute myocardial infarction). Failure to rescue thus provides a measure of the degree to which providers responded to adverse occurrences (e.g., hospital-acquired infections, cardiac arrest, or shock) that developed on their watch. It may reflect the quality of monitoring, the effectiveness of actions taken once early complications are recognized, or both.
The technical motivation for using failure to rescue to evaluate the quality of care stems from the concern that some institutions might document adverse occurrences more assiduously than others. Using lower rates of in-hospital complications by themselves as a quality measure may therefore simply reward hospitals with poor documentation. However, if the medical record indicates that a complication has occurred, the response to that complication should provide an indicator of the quality of care that is less susceptible to charting bias.
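The measure can be sketched as a simple rate: deaths among patients who developed a complication, divided by all patients who developed a complication. This is an illustrative computation under that assumption, not an official measure specification, and the patient records are invented.

```python
# Illustrative sketch of a failure-to-rescue rate: deaths among patients
# who developed a complication, divided by all patients who developed a
# complication. The cohort data below are invented for illustration.

def failure_to_rescue_rate(patients):
    """patients: iterable of dicts with boolean 'complication' and 'died' keys."""
    with_complication = [p for p in patients if p["complication"]]
    if not with_complication:
        return 0.0  # no complications observed, so no rescues were required
    deaths = sum(1 for p in with_complication if p["died"])
    return deaths / len(with_complication)

cohort = [
    {"complication": True,  "died": True},
    {"complication": True,  "died": False},
    {"complication": True,  "died": False},
    {"complication": False, "died": False},
]
print(failure_to_rescue_rate(cohort))  # 1 death among 3 complications
```

Note that the denominator is restricted to patients with a documented complication, which is what makes the measure less sensitive to how thoroughly complications themselves are charted across hospitals.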
A forcing function is an aspect of a design that prevents a target action from being performed or allows its performance only if another specific action is performed first. For example, automobiles are now designed so that the driver cannot shift into reverse without first putting her foot on the brake pedal. Forcing functions need not involve device design. For instance, one of the first forcing functions identified in health care is the removal of concentrated potassium from general hospital wards. This action is intended to prevent the inadvertent preparation of intravenous solutions with concentrated potassium, an error that has produced small but consistent numbers of deaths for many years.
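The automobile example can be mirrored in software design: an API can require proof that the prior action occurred before the target action is even expressible. This is a hypothetical sketch, with invented class and method names, of what a forcing function looks like in code.

```python
# Hypothetical sketch of a software forcing function: shift_to_reverse()
# requires a token that only press_brake() can produce, so the target
# action cannot be performed without the prior action. All names here
# are invented for illustration.

class BrakeEngaged:
    """Proof token: obtainable only by pressing the brake."""
    pass

class Transmission:
    def __init__(self):
        self.gear = "park"

    def press_brake(self) -> BrakeEngaged:
        # The prior action produces the token the target action demands.
        return BrakeEngaged()

    def shift_to_reverse(self, proof: BrakeEngaged):
        # Requiring the token is the forcing function: callers cannot
        # reach this step without having performed the prior action.
        if not isinstance(proof, BrakeEngaged):
            raise TypeError("shifting to reverse requires the brake first")
        self.gear = "reverse"

car = Transmission()
token = car.press_brake()    # required first action
car.shift_to_reverse(token)  # now permitted
print(car.gear)              # reverse
```

The design choice worth noting is that the constraint lives in the interface itself rather than in training or documentation, which is exactly what distinguishes a forcing function from a mere warning.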
The "Five Rights"—administering the Right Medication, in the Right Dose, at the Right Time, by the Right Route, to the Right Patient—are the cornerstone of traditional nursing teaching about safe medication practice.
While the Five Rights represent goals of safe medication administration, they contain no procedural detail, and thus may inadvertently perpetuate the traditional focus on individual performance rather than system improvement. Procedures for ensuring each of the Five Rights must take into account human factors and systems design issues (such as workload, ambient distractions, poor lighting, problems with wristbands, and ineffective double-check protocols) that can threaten or undermine even the most conscientious efforts to comply with the Five Rights. In the end, the Five Rights remain an important goal for safe medication practice, but one that may give the illusion of safety if not supported by strong policies and procedures, a system organized around modern principles of patient safety, and a robust safety culture.
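As a naive illustration of the checklist itself (not of a real medication-administration system, which would rely on barcode scanning and clinical decision support), the Five Rights could be expressed as a verification step. The order fields and data below are entirely invented assumptions.

```python
# Illustrative sketch only: a naive check of the Five Rights against a
# medication order. The data model and values are invented assumptions;
# real systems use barcode scanning and decision support.

EXPECTED_ORDER = {
    "patient_id": "MRN-1234",
    "medication": "heparin",
    "dose": "5000 units",
    "route": "subcutaneous",
    "time_window": ("08:00", "09:00"),  # HH:MM, comparable as strings
}

def check_five_rights(order, scanned_patient_id, prepared):
    """Return the list of 'rights' that failed verification."""
    failures = []
    if scanned_patient_id != order["patient_id"]:
        failures.append("right patient")
    if prepared["medication"] != order["medication"]:
        failures.append("right medication")
    if prepared["dose"] != order["dose"]:
        failures.append("right dose")
    if prepared["route"] != order["route"]:
        failures.append("right route")
    start, end = order["time_window"]
    if not (start <= prepared["time"] <= end):
        failures.append("right time")
    return failures

prepared = {"medication": "heparin", "dose": "5000 units",
            "route": "subcutaneous", "time": "08:30"}
print(check_five_rights(EXPECTED_ORDER, "MRN-1234", prepared))  # []
```

The sketch also illustrates the limitation the text describes: the function can only confirm that the five attributes match an order; it says nothing about the workload, distractions, or wristband problems that determine whether the inputs to such a check are trustworthy in the first place.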