
In Conversation with... Susan McGrath, PhD and George Blike, MD about Surveillance Monitoring

April 26, 2023 

Editor’s note: Susan McGrath is the director of the Surveillance Analytics core at Dartmouth Hitchcock Medical Center (D-H) and George Blike is the former chief quality and value officer at D-H.  We spoke to them about surveillance monitoring of patients in low-acuity units of the hospital to prevent failure to rescue events, its difference from high-acuity continuous monitoring, and its potential applications in other settings.

Sarah Mossburg: Welcome, Dr. McGrath and Dr. Blike. We’re thrilled to have you here talking with us today. Let’s start by having you tell us a little bit about yourselves and your current roles.

Susan McGrath: I’m a biomedical engineer and have worked in a number of different professional settings over my career. I started out working for the DOD [U.S. Department of Defense] and a defense contractor, and then I came to Dartmouth College in 2000 where I was a professor in the engineering school. During my time at the college, I led research that provided terrific opportunities to collaborate with clinicians at Dartmouth Hitchcock, including George Blike. In 2011, I moved over to Dartmouth Hitchcock to focus my work on improvement, quality, and patient safety systems, and I have been there ever since.

My current role is the director of the Surveillance Analytics core at Dartmouth Hitchcock, which sits in our Analytics Institute. Our role is to analyze data related to patient safety, especially monitoring of general care patients, to try to understand those systems better. I’m also an associate professor in the Department of Anesthesiology at Dartmouth Hitchcock.

George Blike: I’m an anesthesiologist. My whole career of over 33 years has been at Dartmouth Hitchcock. I became involved in human factors and systems engineering around 1988. I was mentored by a group of anesthesiologists who were very interested in technology, patient safety, and human factors. We specialized in applying aviation safety concepts to patient safety. I had opportunities to lead quality and safety for the anesthesia department. Then, I became the first patient safety officer for the hospital, and finally I served for about a decade as the inaugural chief quality and value officer for our health system. In all those roles, I teamed up with colleagues like Sue who brought expertise in both engineering and human factors, and that has been gratifying.

Sarah Mossburg: Can you tell me a little bit more about the work that brought the two of you together?

George Blike: Our collaboration was enabled by our individual research backgrounds. I became interested in the concept of failure to rescue in 1992, when Jeff Silber published on it. I was interested in the concept of recovering from and tolerating incidents and accidents rather than just trying to prevent them. As an analogy, avoiding driving drunk prevents car accidents. But we now also have antilock brakes that let you maintain control on an icy day, which is the idea of recovery. We also have an airbag. You could be drunk with bald tires on an icy day, and if you crash into a tree, the air bag could still save you. That’s an example of error tolerance, and you need all three of these safety aspects. I had heard a lot about harm prevention in patient safety, but not so much about recovery and rescue. I had been doing research aimed at improving the recognition of patient deterioration and mobilizing timely treatment when I met Sue.

Susan McGrath: My primary research interest has been in developing tools and systems that appropriately apply technology to physiologic monitoring in a variety of settings. For instance, my lab at the college was trying to understand, both for Homeland Security and DOD applications, how you could collect and use physiologic state information from people who are in the field in these high-risk environments, where there’s a many-to-one sort of care responsibility. We centered on the use of pulse oximetry because you can get a lot of valuable information from a single sensor that most people could wear in these high-risk settings. George’s role at the hospital had become more focused on patient safety systems. He came to the team and said, “I have this problem in the inpatient setting, let’s do something about it there.”

Sarah Mossburg: The two of you worked together on an AHRQ-funded patient safety learning lab (PSLL) focused on failure to rescue. Could you tell us a little bit about this PSLL and its purpose?

George Blike: The learning lab was a really great window where Sue and I had the ability to formulate ideas, perform analysis, and think about all of the components from the micro to the macro level of rescue systems in complex tertiary care hospitals. The ability to engage in this really formative process through the learning lab was pretty incredible. Currently, Sue is leading the continuation of that work, and I’m a member of the analytics team looking at surveillance and early recognition of serious but treatable complications.

Susan McGrath: To close the loop on that, George talked about failure to rescue as being a primary interest of his, and surveillance monitoring is one tactic, one way of addressing that problem. The way that it’s designed for the general care setting is very specific. We can talk about those details, but we always want to recognize that there are other elements to this problem, other ways of achieving earlier recognition of patient deterioration to prevent failure to rescue events. Our goal was always to move up the timeline of recognition and response. That’s what the failure to rescue learning lab allowed us to look at in depth.

With surveillance monitoring, we had implemented a system that had a specific purpose, and it was extremely successful at preventing deaths after a complication occurred in the inpatient setting. But we knew that we could implement other tactics to move up that timeline. Over the years that we worked in the PSLL, we were able to identify what other elements we could implement using a systems approach: not just purchasing a piece of equipment and installing it, but having goals around what the system would achieve, what the requirements for that are, and how we could build and integrate the components. This is not just about technology; it’s also about processes, governance, education, and many other system elements. It was a really beautiful opportunity for our team to look long and hard at failure to rescue as a system, and it started us down the road of looking beyond surveillance monitoring to implement other tactics.

Sarah Mossburg: That’s great. Thank you so much. Could we talk more specifically about surveillance monitoring, because that is one of the pieces we’re interested in learning more about today. Before we dive into that conversation, it would be helpful if you could define surveillance monitoring for us. When you use that term, what are you trying to convey? What does that mean?

George Blike: We added this terminology because in general care, the ratio of providers to patients is around one to four or five. That is very different from the ICU [intensive care unit] or the operating room, where it’s one on one. In those high-acuity settings, we clinicians have EKG [electrocardiogram], pulse oximetry, blood pressure, and other physiological monitors that we use to continuously monitor for abnormalities, and we are able to respond immediately when we see significant changes.

In contrast, surveillance monitoring is much more like a safety net to back up clinicians who are managing multiple patients at once. It’s like the way your car is constantly monitoring for severe deceleration while you’re busy driving. When the car sees severe deceleration, it can determine whether it is a crash and then pop the air bag to save the occupant. In the same way, when we deployed surveillance monitoring, we were looking to detect severe and sustained events and then mobilize a response to save the patient. A nurse in general care managing five patients may be busy with one patient when another patient down the hall gets into trouble, alone in their room, and no one knows it. All too often clinicians come into a room they haven’t visited in a couple of hours to find their patient unresponsive, and they don’t know how long the patient has been that way.

Surveillance monitoring is about interrupting nursing staff from what they’re doing to do something else that’s more important. It is geared toward severe events, not minor stuff. It’s running in the background. It’s a safety net. It’s helping appropriately redirect attention from a nurse who’s busy over there but suddenly needs to come over here and pay attention to this patient. That’s how we differentiated it from condition-specific monitoring that most clinicians are used to.

Susan McGrath: Our institution wasn’t the first to implement continuous patient monitoring in this setting, but arguably our implementation is one of the most successful. One reason is that we saw a difference between monitoring in the critical care or intermediate care setting and monitoring in the general care setting, and we designed a system specifically for general care.

George hit on a very important point, which is resource allocation. In the critical care setting, there’s a clinician available to respond any time there’s an alarm. They’re close to the bedside all the time, and they can respond and adjust the monitoring system so that it’s really focused on that patient. That level of resource availability and focus doesn’t exist in the general care setting.

When patients are in the general care setting, the prevalence of deterioration that needs attention is much lower; otherwise those patients would be in critical care. Nurses who work in general care understand that most of the time their patients are okay: they’re going to come in, be discharged in a couple of days, and everything is very likely to go well. So, if you use the condition-specific monitoring approach in the general care setting, where the prevalence of actionable deterioration is different, clinicians’ expectations of deterioration are different, and the resources available to respond are different, you end up with too many alarms to reasonably respond to. If a nurse hears lots of alarms, interrupts what they are doing to go to the bedside, and finds that the patient’s condition doesn’t require immediate intervention, it reinforces the idea that most alarms aren’t important, and they are less likely to respond in the future. There has been a lot of work in human factors and cognitive science to understand and explore issues related to alarms and help system designers address them. We used that work to help us configure the surveillance system in the general care setting.
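
To make the base-rate argument concrete, here is a small illustrative calculation of the share of alarms that would reflect real deterioration at general care prevalence. The sensitivity, specificity, and prevalence figures are assumptions chosen for this sketch, not measurements from the Dartmouth Hitchcock system.

```python
# Illustrative base-rate arithmetic: why ICU-style alarm settings flood
# general care with false alarms. All numbers are assumptions for this sketch.

sensitivity = 0.95   # alarm fires for 95% of truly deteriorating patients
specificity = 0.90   # alarm stays silent for 90% of stable patient-checks
prevalence = 0.01    # assume ~1% of general care checks reflect real deterioration

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)
print(f"Share of alarms that are real: {ppv:.1%}")  # ~8.8%
```

Even with a quite accurate detector, roughly nine out of ten alarms would be false at that prevalence, which is the dynamic the speakers describe nurses learning to tune out.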

Sarah Mossburg: I read in your published research that the system uses “wide” parameters for alarming. People are used to continuous monitoring in an ICU or a more acute setting, which has tight alarm parameters, for example setting the pulse oximeter to alarm when saturation drops below 92% or 90%. With wider parameters, the lower alarm threshold becomes 85% or 80%, which makes clinicians who are used to continuous monitoring nervous and a little uncomfortable. Could you speak to that for clinicians who are used to the more traditional approach to setting alarm parameters in clinical monitoring?

Susan McGrath: The way that things were constructed in surveillance monitoring, with wider parameters for alarming, is intended for people who are multitasking; it redirects their attention. George mentioned the words severe and sustained events, and that’s really important. You have to redirect a nurse’s attention judiciously, particularly in this setting. You can’t redirect attention over and over again and expect to get a response if the alarm is not actionable. The system is set up so that when a nurse hears an alarm, they should think, “I hardly ever hear this noise when it’s not something really important. I need to go now to the patient’s bedside.” The nurse’s expectations about the patient population are an important and often overlooked part of designing monitoring systems. Nurses have formed a cognitive model of what’s happening based on their experience and knowledge, and the alarm system has to be tailored to those expectations and call only when it’s really important.

George Blike: The status quo is that most people are not continuously monitored. Thus, finding someone unresponsive when you come in to do vital signs or give a medication happens more than it should. Some literature suggests that 8% of cardiac arrests in the hospital are unwitnessed and unmonitored. And we know that some of those arrests are rescuable if you can start the rescue and resuscitation sooner. We all saw the NFL game where the football player got a good resuscitation because it happened immediately. In contrast, most hospitals do not have continuous monitoring for all patients in general care; the standard is to check vital signs intermittently, every four hours. Therefore, monitoring that picks up that someone’s oxygen saturation has been below 80% for more than 30 seconds and prompts immediate action is a huge improvement.

Another aspect of the alarm problem is distraction. In human factors, distraction is traditionally described this way: people who are multitasking are presumably doing something important, and if you distract them with a nuisance interruption, they can make errors in the work they were doing. It’s why we give nurses a quiet place to get their medications in order for their patients. If you interrupt people when they’re doing important work, you cause problems and you cause harm. If you’re not setting up your system to interrupt only when it’s appropriate, the alarms cause more harm than good because you distract staff. For example, as an anesthesiologist, during surgery, even if the patient’s blood pressure had gone down a little bit, I wouldn’t yell at the surgeon, “Hey, the blood pressure just went to 90 over 50!” because they’re intensely focused on performing the surgery.

We set up these systems so that when one does grab the nurse’s attention, it’s something worthy of interrupting them from what they’re doing, and it’s worth the risk. The alarm must have high specificity: it signals that there’s something actionable that needs to be done.

Finally, there’s the literature on alarm fatigue, which gets confused with the broader alarm problem because it’s similar. When people get a lot of a particular signal, the brain starts to filter it out. If something’s going off every minute and making sounds, it no longer gets your attention. Your brain is designed to do that, to filter out noise. As a result, you won’t hear the one important alarm out of the 100 or 1,000 unimportant ones. Those numbers are not an exaggeration. Our research shows that if you set up alerts at traditional ICU levels, or even at a more condition-specific level, you get thousands of alarms per nurse per shift.

Susan McGrath: When people were questioning that lower alarm limit, the fear was that people would still suffer harm or die. We did a ten-year review study looking at death and harm related to administration of opioids, and no one died in that inpatient setting from opioids when using this system. It does act as that airbag, as George said earlier. And it strikes a balance between alarming too much, which creates the cognitive issues George was just referring to, and alarming too little, which risks patient harm. It helps nurses believe that the alarm means something when they hear it.

There’s also a 15-second delay after an SpO2 of 80% is reached before the audible alarm goes off, and then there’s another delay before the pager goes off. When we look at the distribution of vital signs in the inpatient setting, it’s clear that if you alarmed immediately on those conditions, without a delay, you’d get a lot of alarms. So, with that low SpO2 threshold and a delay that weeds out noise, we still didn’t have people dying from delayed rescue.
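
To sketch the shape of that logic, here is a minimal threshold-plus-delay (“severe and sustained”) alarm with tiered escalation, in the spirit of what is described above. The threshold, delay values, class, and method names are illustrative assumptions for this sketch, not the actual implementation or configuration of the Dartmouth Hitchcock system.

```python
# Minimal sketch of "severe and sustained" surveillance alarm logic.
# All thresholds, delays, and names are illustrative assumptions,
# not the production configuration discussed in the interview.

SPO2_THRESHOLD = 80.0   # alarm only when saturation (%) falls below this
AUDIBLE_DELAY_S = 15.0  # condition must persist before the bedside alarm sounds
PAGER_DELAY_S = 30.0    # assumed further persistence before the pager fires

class SurveillanceAlarm:
    def __init__(self) -> None:
        self.below_since = None  # time the current low-SpO2 episode began

    def update(self, spo2: float, now: float) -> str:
        """Process one SpO2 sample; return 'none', 'audible', or 'pager'."""
        if spo2 >= SPO2_THRESHOLD:
            self.below_since = None  # brief dips self-correct; reset the timer
            return "none"
        if self.below_since is None:
            self.below_since = now   # start timing a new episode
        elapsed = now - self.below_since
        if elapsed >= PAGER_DELAY_S:
            return "pager"           # sustained event: escalate to the pager
        if elapsed >= AUDIBLE_DELAY_S:
            return "audible"         # sustained event: sound the bedside alarm
        return "none"                # too transient to interrupt anyone

# Example: a brief artifact stays silent; a sustained drop escalates in tiers.
alarm = SurveillanceAlarm()
for t, s in [(0, 78), (10, 95), (20, 78), (36, 78), (52, 78)]:
    print(t, alarm.update(s, t))  # none, none, none, audible, pager
```

The design choice the speakers emphasize is visible in the reset step: a transient dip never interrupts anyone, so the alarms that do fire retain their meaning.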

George Blike: The opioid overdose study we did is a good prototype of the harm this system can prevent. A lot of patients in the hospital are on opioids. The idea that someone could die in the hospital, with all the doctors and nurses around, from something that’s 100% treatable with ventilation and a reversal drug like Narcan has to be seen by everyone as avoidable. This is the idea of serious but treatable complications, and respiratory arrests due to opioids are a really useful example because no one could defend that particular harm to patients.

Susan McGrath: We know that other tactics can be used, like early warning scores and risk prediction, to identify patients who are deteriorating or are at risk for deterioration. A risk prediction score tells you what might happen to a person, not what will happen. It’s based on statistics and a population, and it’s not perfect. Even the very best risk scores, if used to determine who gets continuous monitoring, would miss 15% to 20% of the people who will deteriorate. We do think early warning and risk scores can be used to identify people who need to be seen more often, but we don’t think they can replace continuous monitoring, because the scores are not calculated continuously at this point. We would like to see that happen, and our work is leading in that direction. We believe that other tactics besides surveillance monitoring can be added to move detection even earlier and bring resources to the bedside earlier, but surveillance monitoring acts as the failsafe, the airbag.

Sarah Mossburg: Thank you so much. I want to emphasize the point that you’re both making, which helps clinicians understand why this is so vital. You’re trying to address the problem of severe and sustained events. The lower thresholds allow you to catch those severe and sustained events while preventing nurses on general care floors from being inundated or overwhelmed with alarms. Does that essentially encapsulate what you’re saying?

George Blike: I might frame it a little differently. The issue is your system must be tuned to say, “Is this the type of situation that would trigger a rapid response team, a code team, or that level of response?” Those are the actions that you’d be willing to be interrupted for, not just someone with sleep apnea that’s mild and will self-correct. To do that, you need to design the system to detect a deviation that’s more severe and more sustained. To go back to the car analogy, you don’t want the air bag popping when someone’s just braking hard.

Sarah Mossburg: That was an excellent clarification. Thank you. I know some of your work was about integrating surveillance monitoring into workflows. Could you speak a little bit about that?

Susan McGrath: For the original system design, George brought in a human factors engineer from outside the organization. The engineer mapped the existing workflow of nurses at the bedside and thought about how the monitor could be integrated into that workflow. We had to decide where to put the unit, where the supplies would go, how it would be configured, and what the actions and responses to alarms would be. Nurses were heavily involved in developing the system; it was a conversation and a collaboration. When the system was implemented, George and other leaders, biomedical engineering, and the manufacturer of the monitoring system were on site for weeks. They did leadership rounding every day, asked what nurses thought about the system, and made adjustments. For example, George wanted the oxygen saturation alarm to be lower than 80, and the nurses came back and said they’d like it to be higher.

We’ve added additional features over time. For instance, we added wireless patient sensors that, with the monitor, collect vital signs and enter them into the medical record at the press of a button. This function saves a lot of time in capturing and recording vital signs. That work was part of the failure to rescue learning lab. For all of these system changes, we always ran a workshop where we would walk through how the new system would be integrated into the workflow, hear what the clinicians had to say, make adjustments accordingly, collect data as the system was implemented, and continue to improve the system as needed.

We always find things we didn’t expect; you can’t plan it all. For instance, when we switched from wired to wireless monitoring, we saw that other clinicians, like physical therapists, were noting conditions that hadn’t been detected before, because patients previously weren’t monitored while ambulating. There were a lot of these nuances and ways that people learned to use the system to their advantage, as a tool.

Sarah Mossburg: Could you give us some thoughts about next steps for the work?

Susan McGrath: There have been a lot of questions about implementing the system besides the alarm thresholds. Cost is another issue that often comes up. We know that in settings without continuous monitoring, people over-order cardiac telemetry. Not only is that very expensive, but it also brings unnecessary interruptions to the floor nurses. George, as chief quality officer, found that with appropriate ordering of telemetry, the savings from avoiding inappropriate telemetry orders could essentially pay for the system that continuously monitors patients in the general care setting. We also found in our study that surveillance monitoring can reduce transfers to a higher level of care. Even when a patient is transferred to a higher level of care, they stay there for less time because their condition was recognized earlier. This represents an overall savings to the hospital. So, we would argue that the system is very cost-effective from a financial perspective, in addition to avoiding the huge patient safety issues that arise when someone suffers preventable harm.

Another thing that people ask about is whether you should use pulse oximetry or another sensor. We believe that you can build a surveillance system with more than one type of physiologic sensor; people have done it with capnography. We prefer pulse oximetry as the sensor that brings resources to the bedside because with that one sensor you can get a lot of information about patient state. That’s not to say you shouldn’t monitor other physiologic parameters like respiratory rate, but you might not want to alarm on those to bring resources to the bedside. By all means, monitor other things, but if you’re building the airbag, you should alarm on only one or two parameters at the most. If you start adding more sensors and more alarms, then you’re right back in the situation where there’s too much going on for people to process, and they won’t be able to pay attention and respond when needed.

Sarah Mossburg: Have there been any failures of this technology to live up to the hoped-for performance? And if so, what happened and what can we learn from them to move this forward?

George Blike: I would say you have to maintain all parts of the system, including the clinician procedures, protocols, and habits. We had a system that escalated from the primary nurse to the charge nurse if the primary nurse didn’t respond. But with high turnover, a lot of travel nurses, and the COVID-19 pandemic, most hospitals have suddenly had to have charge nurses take patient assignments because of workforce shortages. That means the charge nurse can no longer respond as a backup to the primary nurse when an alarm goes off. Be thoughtful about the fact that disruptive situations, like the pandemic, can require you to adapt your processes.

Another issue arises when travel nurses who weren’t onboarded to the rationale for the wide alarm limits come in, reset the alarm settings, and decide the system doesn’t work because of over-alarming. You have to keep orienting people to the concepts and purpose of surveillance monitoring, because at this point it’s not a standard thing that every hospital has. We know that is a vulnerability: new people not understanding the system, its rationale, and how to use it optimally.

Susan McGrath: That points back to the idea that it’s a system and has various components, not just the technology. If you perturb the system, you’re going to get a different output. Having feedback from performance measurement helps you understand what’s going on with the system and address those issues quickly.

It’s also very important to start with a good signal. If you are not measuring physiology well with a highly reliable device, you’re not going to be able to improve things down the line as the system processes the sensor data. We have a highly reliable pulse oximetry system and continue to monitor its performance over time, so we can be sure that from the minute it’s on the patient, we’re getting a reliable, high-quality signal.

George Blike: Related to that, we’ve done some work to help optimize fetal heart rate monitoring systems, trying to pick up fetal distress in women in labor, and some work on monitoring neonates who are sent home but still have some risk for respiratory and cardiac issues. These are settings where there’s a population in which you need to figure out who needs attention and pick up the more severe events that are easy to miss if you’re just not watching.

Sarah Mossburg: Thank you both so much. Before we close, is there anything that we didn’t discuss that you would like to just briefly mention?

Susan McGrath: I think the idea of having engineers involved in healthcare system design is very important. If you’re going to focus on systems engineering, you should probably have systems engineers leading the approach. Historically, healthcare has frequently had systems designed or implemented by people with no design or systems engineering experience, often with less-than-optimal results. AHRQ recognized this issue and created the patient safety learning labs focused on integrating systems engineering approaches into healthcare. Working with systems engineers is something that George and Dartmouth Hitchcock have done particularly well. Other organizations, such as Johns Hopkins and Mayo, have also brought in dedicated systems design resources. It’s important to recognize that systems need to be engineered and designed, that doing so requires special skills, and that those skills should be an integral part of healthcare organizations.

George Blike: To bring it full circle to the beginning, Sue and I had been working together on this kind of problem for over two decades. Imagine how excited we were when AHRQ came out and started to value learning labs and resources for systems engineering. We were already there, so we didn’t have any cultural barriers to running a safety learning lab predicated on applied systems engineering for complex systems like rescue. I was also positioned as an organizational leader who valued systems engineering, and I think having the culture already in place allowed us to go further faster.

Sarah Mossburg: This has been a really interesting conversation. Thank you for the time that you’ve given us today.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.