
In Conversation with Lucy Savitz about Learning Health Systems for Patient Safety
Savitz LA, Sousane Z, Mossburg SE. In Conversation with Lucy Savitz about Learning Health Systems for Patient Safety. PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2025.
Editor’s note: Dr. Lucy Savitz is a professor of health policy and management at the University of Pittsburgh School of Public Health. We spoke with her about learning systems and their impact on patient safety.
Sarah Mossburg: Welcome, Dr. Savitz. Please tell us about yourself, your current role, your interest in learning systems, and how they relate to your work.
Lucy Savitz: I have been in health services research for over 40 years. I am currently a professor in the Department of Health Policy and Management at the University of Pittsburgh School of Public Health, and I also serve as a senior advisor for the University of Pittsburgh Medical Center (UPMC) Health Plan Center for High-Value Health Care. I am also currently Chair of the AcademyHealth Board of Directors.
Sarah Mossburg: To level set for today’s discussion, could you talk briefly about what we mean by “learning health systems”?
Lucy Savitz: It is important for people to understand the history of learning health systems because this kind of work did not start out with that label. Earlier in healthcare delivery, people talked about change management and referenced the work of Peter Senge, which led to clinical process innovation or performance improvement in health systems. Long before we had the term “learning health system,” people like Stephen Shortell created something called the Center for Organized Delivery Systems, which was a consortium that commissioned research to improve care within their organizations.1,2 I did one study for them where I looked at the role of healthcare information technology (IT), and another where I looked at the implementation of service lines across five delivery systems.3
The publications To Err Is Human in 2000 and Crossing the Quality Chasm in 2001 really laid the groundwork and put a label on this kind of work. Crossing the Quality Chasm concluded that a high-functioning healthcare delivery system is obligated to take the data that are an artifact of the care delivered and use them to continuously improve. These were followed by a series of reports by the Institute of Medicine (IOM).4 As people began doing this work and understood the importance of engaging the communities where their patients and members lived, the term “learning health system” emerged. In my opinion, the next generation of this work is going to be under the rubric of “learning health network,” where we go beyond single systems.
Sarah Mossburg: What are the typical elements of a learning health system?
Lucy Savitz: I have been working for the last 5 years with a volunteer group across the nation spearheaded by the Learning Health Community that Charles Friedman from the University of Michigan leads with Josh Rubin.5 They partnered with AcademyHealth, and through the Learning Health System Interest Group that I lead, we have been doing this work with 53 volunteers across the country. Everybody says they are a learning health system, but the question is, what distinguishes them? We have produced a structure with associated competencies, which we are in the process of validating. The three key pillars are leadership and governance, sociotechnical infrastructure, and improvement execution, each with associated competencies. Cross-cutting these are culture, values, and equity, which we believe need to be embedded in everything we do.
Sarah Mossburg: What types of information are typically reported through learning systems?
Lucy Savitz: It depends on who they are, what they are doing, and what their main objective is. It is important to understand how these learning health systems evolve and where they come from within an organization. In some cases, it is top-down, with senior leadership deciding to be a learning health system resourced and governed by top leaders. In other cases, it starts as a grassroots initiative, like in a cardiovascular or orthopedics clinical program. One clinical program starts, and then others in the organization may see that it’s a good idea and follow suit. It is about having the data you need to learn, make improvements, and spread and scale those improvements. The spread and scale part is very much in its novice stage. Most people do not know how to spread and scale, especially when it is outside of their own organization.
I was part of the original group that led the High Value Healthcare Collaborative, which started in 2008 and was sunsetted in 2019.6,7 It was a group of 16 delivery systems that came together with the idea that we can learn more together than we can alone. There are some questions that we may not have enough data to answer individually, but by coming together, we can tackle some important problems that we all face and share that learning. I remember sitting in a meeting with Rob Nesse at the Mayo Clinic, and hearing him say, “We do a lot of good stuff. We just do not know how to spread and scale it.” Even to this day, that’s something that learning networks continue to struggle with. It has to do with issues around evaluation methods and whether evaluation is done in a way that allows for adaptation versus fidelity in different settings and circumstances.
Sarah Mossburg: It sounds like the issues with spreading and scaling are outside whichever group was the original impetus for the learning. Is that right?
Lucy Savitz: Not necessarily. A lot of times, you are spreading and scaling within your own organization. The organization I am in right now is a 40-hospital system with hundreds of outpatient clinical settings. Spreading across all those entities and having one standard of care is a big challenge. With all the mergers and consolidations happening in health care, this problem will continue.
Sarah Mossburg: How do learning health systems facilitate the shared learning process and collaboration within and across health care?
Lucy Savitz: They need to be willing partners. In some ways, we are using methods that allow us to better understand the issues of context and fidelity versus adaptability. It used to be that when people put something in place, the evaluators had to be at arm’s length. They never really talked to the implementation teams that were doing the improvement work. In many cases, the evaluations were designed as an afterthought, if even that. They were not designed in a way that you could say, “Of these components of a complex intervention, three must be implemented intact, exactly as-is, with high fidelity, while two of them could be adapted depending on your situation or your circumstance.” At Intermountain, I worked on the evaluation of the Mental Health Integration program. At the time, Intermountain required that all care managers had to be registered nurses (RNs). We spread and scaled this to other health systems, including those in Biloxi, Mississippi, after Hurricane Katrina. While I was helping rebuild the primary care system there, they could not afford to have an RN serve as a care manager and were using community health workers instead. However, we evaluated it in a way that suggested they did not need high fidelity to that aspect of the complex intervention and could still expect to see the same kind of result.
Sarah Mossburg: The ability to understand what components of the intervention you must implement exactly as dictated and what is flexible within your health system sounds like a core issue.
Lucy Savitz: You must define the evaluation design along certain lines. A stepped-wedge design allows you to observe adaptability, and a mixed-methods design allows you to bring in contextual information using qualitative methods. Those are two major advancements that we have thought about in terms of creating the evidence base we need to do the kind of shared learning we expect to see in learning health systems.
Sarah Mossburg: What structures and processes do organizations need to have in place to facilitate a successful learning system?
Lucy Savitz: It depends on how big the organization is and whether the learning health system is the whole organization or just a small part. Data and analytics are critical components. If you cannot get the data, you cannot do this kind of work. You also need some kind of governance structure and visibility in the organization to demonstrate the results of what you are doing. Investment by the organization is crucial. Frontline staff often do not have the time to implement new processes, so as a researcher it is important to consider how to engage frontline staff in this process. They need to feel it is important, be able to see the benefits of it in their work, and be compensated for their time.
Sarah Mossburg: What would you say are some best practices around engaging frontline staff?
Lucy Savitz: It depends on the organization. Intermountain, where I worked for 12 years, had an advanced training program that started in the late 1990s to help all senior executives understand the benefits of having real-world evidence to inform their decision-making. This training filtered down across the staff at all levels, making it available to nurses and frontline practitioners. One thing the organization can do is to train people to help them understand why doing this kind of work is an important part of their regular work.
Sarah Mossburg: Earlier, you mentioned data and analytics as critical, and you also mentioned the importance of having real-world evidence. Could you talk a little bit more about those?
Lucy Savitz: Data and analytics are crucial to a learning health system. The data is an artifact of the care that is being delivered. The question is: How do you liberate the data that the health system is already collecting? Working in a healthcare delivery system, we collect a ton of data. We have a moral and ethical responsibility to use that data. There are newer technologies, often under the rubric of artificial intelligence (AI), which can help marshal that data in ways we have not been able to do successfully in the past. For example, AI can extract information from the notes in a medical record.
There are also a lot of predictive analytics going on, which is not new, but large language models can enhance analytic prediction.8 The thing that concerns me today with predictive analytics is the question of just because you can, should you? There is a proliferation of predictive analytic tools, and we have not validated them because most people do not take the time or have the capacity. The tools are delivered as a black box and cannot be readily validated, which is a real problem. Think about a clinician getting 15 different alerts all the time; that causes alert fatigue. We need to be smart about how we use the data analytics available to us, understand the source of the data, and understand how that will inform the analysis we are doing. Data element values often are defined differently in different functional areas within an organization because they are collected for different reasons (e.g., FTE by human resources, scheduling, etc.). The data may have the same labels but different definitions. A well-trained analyst will know the right place to go to get the right data for the questions you are trying to answer.
The other thing I find is that some health systems do not believe the evidence until they can recreate it with their own data and within their own patient population. It is not enough to read about it in a peer-reviewed article. It is necessary to have the analytic capability to demonstrate if it would still work within their own patient population.
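As an editor’s illustration of that point, a system can check a vendor-supplied, black-box risk score against its own observed outcomes before trusting it. The sketch below is a minimal, hypothetical example (the function name, scores, and outcomes are invented for illustration); it computes a simple rank-based discrimination statistic (AUC) on local data, one basic way to demonstrate whether a published tool still works in one’s own patient population.

```python
# Editor's illustrative sketch (not from the interview): validating a black-box
# risk score against locally observed outcomes via a rank-based AUC -- the
# probability that a randomly chosen patient who had the outcome is scored
# higher than a randomly chosen patient who did not. All data are hypothetical.

def local_auc(scores, outcomes):
    """scores: model risk scores; outcomes: 1 if the event occurred, else 0."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    # Count pairwise "wins" for positive cases; ties count as half a win
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical local validation sample: vendor scores vs. observed outcomes
scores = [0.9, 0.7, 0.6, 0.4]
outcomes = [1, 0, 1, 0]
print(local_auc(scores, outcomes))  # 0.75 here; ~0.5 would be no better than chance
```

An AUC near 0.5 on local data would suggest the tool does not discriminate in that population, regardless of what the vendor or the literature reports.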
We came together as the High Value Healthcare Collaborative to say we can’t answer all the questions we want to alone, but we can agree to share learning by sharing the results of the questions that we can answer. We then provide language to others to recreate it in their organization. A recent Becker’s report highlighted a new group called Longitude Health, funded by Providence Health System, Baylor Scott & White, Novant Health, and Memorial Hermann.9 These four health systems aim to work together to transform business models and increase performance around cost, quality, access and patient experience, and build equity value in delivery systems. They deal with these issues individually but are coming together and pooling resources for these endeavors, which is exactly what we did in the High Value Healthcare Collaborative. In fact, two of the members were founding members of the High Value Healthcare Collaborative.
Sarah Mossburg: These individuals likely saw the value of that endeavor and are bringing it forward.
Lucy Savitz: They understand the value of working collectively. This is a private initiative to come together as a learning network. The federal government and many of its national initiatives, like the Center for Medicare and Medicaid Innovation’s Hospital Engagement Networks and Partnership for Patients,10 also followed this learning network model. I led one of those initiatives, and the Patient-Centered Outcomes Research Institute has its learning network for the Health System Implementation Initiative. There is a paper led by Carol Lannon that looks at a maturity grid for learning networks.11
Organizations like Cincinnati Children’s Hospital have paved the way for learning networks and need to be applauded for that.12 The thing that is new and noteworthy about the learning networks that Cincinnati Children’s put together is the active engagement of patients and families. That is a really important element that we will see embraced more fully by others in the future. Most other networks do not include their direct involvement, except for maybe one patient advocate. It is very different to have total engagement.
Sarah Mossburg: AHRQ’s National Action Plan to Advance Patient Safety notes learning systems as a foundational area for total system safety. From your perspective, why are learning systems important when it comes to patient safety?
Lucy Savitz: In healthcare, there are limited resources available. We do not have the resources for each of us to independently develop innovations to prevent the same specific harms that we have been working on for decades. There is a lot of waste in the system. A learning network where people can share improvements that they’ve identified and evaluated in a way that’s believable to others for adaptation in their setting and circumstance is valuable and cost-effective.
Sarah Mossburg: What are some ways that we can evaluate the effectiveness of learning systems?
Lucy Savitz: We first need a concrete definition. What is a learning health system? And is it mature, or is it just starting? You would have to design a study where the systems were all at the same stage (e.g., what you would call mature learning health systems), so you are comparing apples to apples.
You must also think about whether they have specific goals. System-led learning networks tend to have organizational priorities, and you can evaluate the extent to which they deliver on those priorities. The challenge is that not all learning health systems have the same priorities, which makes it hard to assess whether they accomplish what they set out to do, especially when you look at some of these private ones that come together. As long as healthcare organizations keep investing in it, that should say they are getting something out of it, right?
Sarah Mossburg: What are some ways learning health systems can improve patient safety?
Lucy Savitz: The biggest achievement in the High Value Healthcare Collaborative was when we were charged by the Centers for Medicare & Medicaid Services (CMS) to implement the 3-hour sepsis bundle across several systems. It was an all-in bundle, so you had to do all five elements, or you got no credit for it. We thought we had these high-performing health care systems, but the data did not look like they were implementing the 3-hour sepsis bundle. When we went out in the field and did qualitative work, we found that some of the reasons for not getting credit for the bundle were mechanical, while others were out of concern for the patients.
The fluid bolus that must be pushed within a short time as part of the sepsis bundle was dosed based on the patient’s weight. Physicians at one healthcare organization were afraid that if they pushed a weight-based fluid bolus in morbidly obese patients who had renal failure or congestive heart failure, the high fluid volume could hurt the patient. Other systems could not identify time zero electronically in their system, or they could not electronically capture the full fluid bolus. When the question arose of whether the fluid bolus could cause harm to those patients, we were able to pull data across the 16 delivery systems and answer it: No, you are not harming your patients if you follow these guidelines. And then, suddenly, it changed almost overnight because of the data-based evidence.
The other thing that was important was testing new payment models. When testing episodic bundle payment for hip and knee replacement, we looked at the 30 days prior to the surgery, the surgery intervention, and then 90 days post-surgery. We found that pre-op prep at one of our health system partners was amazing, so the surgeons from the other health system partners went there to observe what they did. Then we found another system where their time for wheels-in to wheels-out of the operating room for the surgical procedure was amazing—under 30 minutes. All the surgeons flew there to see what they were doing. There was another system that was doing best with 90 days post-op. Not everybody did everything best, but because they were in this network together, they could learn from each other and lift all boats higher. With a network, you do not have to be the best at everything. You just have to establish trust with the partners and have the willingness to share what is learned.
Sarah Mossburg: That is a good example of how data collected within learning systems can be used to improve patient safety. Within learning health systems, what roles do different people play (e.g., organizational leaders, clinicians, patients/families)?
Lucy Savitz: It depends on the organization. As I mentioned earlier, governance and leadership comprise one of the three key pillars of learning health systems. Intermountain had something they called the clinical program infrastructure. The organization was structured for improvement, with nine clinical program areas. Each area had a guidance council that met regularly. Every year, they updated their clinical pathways based on evidence generated or gathered from the literature or other colleagues presenting at national meetings around the country. They built in the structure to absorb and develop the evidence needed for expertise in their clinical area, which was completely resourced by the delivery system.
Other places will have an evidence committee that covers all areas, bringing in new evidence and deploying it within the organization. It really depends on how the learning health system is constructed in the organization and what the expectations are. If it is grassroots, it is a smaller effort.
Lucy Savitz: One of the first papers I ever published was in 2000 on the lifecycle model of clinical process innovation.13 We looked across five delivery systems that were part of a network I was leading for AHRQ, the Integrated Delivery System Research Network. Within the network, some health systems were just starting out, some were on the curve and working harder, some were mature, and some were at risk. At that time, when I went to Intermountain and met with frontline staff, because so much improvement was happening, they would say, “Oh, that is the flavor of the month. We will just ignore it. It will go away.” Another point in the paper is that you can lose one pivotal person in your organization who supported that, and you can revert further back on the curve. It is not necessarily a continuous linear upward advancement. If there is a change in a CEO, for example, the priorities will shift and change.
Sarah Mossburg: For organizations that are just starting, what advice would you have for them to move forward and grow or build their learning capacity?
Lucy Savitz: For smaller entities, it is important to make sure they have visibility to senior leadership and report regularly on what they are doing. It is also essential to try to align the work they are doing with the priorities of the organization.
Additionally, being able to show value, whether that is de-implementing something ineffective or solving a headache that makes staff happier, is crucial. Employee wellness and health are significant issues. A colleague of mine at Intermountain always said, “95% of your time is to do your job, 5% is to do it better.” It is everyone’s responsibility to think about how to do our work better.
At Kaiser, we were called in to consult on an increase in surgical site infections in the orthopedics group. They had tried 52 ways to address this over the last year, and only four had any evidence behind them. I called it the spaghetti effect—they kept doing things, and it was like throwing spaghetti at the wall to see what stuck. You cannot make improvements that way. A learning network aspires to put systematic rigor around the improvement process and ensure it’s evidence-based when possible.
Sarah Mossburg: Could you talk about other specific facilitators and barriers to successful learning health systems?
Lucy Savitz: In terms of facilitators, you need investment. The most successful learning health systems have had some institutional investment behind them and alignment with the priorities of the system. It is also important for researchers to understand that they are not independent investigators focusing on their specialty or what is most interesting to them. It is about the larger priority of the health system, and how to take the tools you have as a researcher and bring them to the data that is available to address the need or problem expressed by organizational leadership.
Timing is a barrier, especially when relying on extramural funding. You can submit a grant and wait a year until you get the money. During that time, the health system probably has a new problem. Instead, you can design it so that as the evidence builds, you are constantly getting information that you can feed to managers to support their decision-making.
The environment that we are working in right now is another barrier. Being asked to do something more and different is not always received well by overwhelmed staff members. We often ask healthcare staff to collect data but may not always tell them why or what progress has been made. Feedback loops are crucial for people to understand why their work is important.
Sarah Mossburg: Could you give some examples of effective ways to operationalize that feedback?
Lucy Savitz: Run charts and statistical process control charts are probably most effective because you can follow the lines, and you don’t have to be an expert to interpret them. If the rate goes down, you have made an improvement. If it goes up, you have not.
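To make that feedback mechanism concrete, here is a minimal, editor-added sketch (not from the interview) of one common statistical process control chart: the individuals (XmR) chart. It computes a center line and control limits from a run of rates, so a team can see whether a new data point signals real change or ordinary variation; the monthly rates below are hypothetical.

```python
# Editor's illustrative sketch: center line and control limits for an
# individuals (XmR) control chart, a common SPC chart for frontline feedback.

def xmr_limits(values):
    """Return (center, lower, upper) control limits for an individuals chart."""
    n = len(values)
    center = sum(values) / n
    # Average moving range between consecutive observations
    mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    # 2.66 is the standard XmR constant (3 / d2, where d2 = 1.128 for n = 2)
    upper = center + 2.66 * mr_bar
    lower = max(0.0, center - 2.66 * mr_bar)  # a rate cannot fall below zero
    return center, lower, upper

# Hypothetical monthly fall rates per 1,000 patient-days
rates = [3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2]
center, lcl, ucl = xmr_limits(rates)
```

Points outside the limits (or sustained runs on one side of the center line) flag a signal worth investigating, which is exactly the kind of readable, non-expert feedback described above.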
Sarah Mossburg: What is the role of technology, such as artificial intelligence, in learning health systems?
Lucy Savitz: AcademyHealth just held an expert meeting on artificial intelligence, and one of the things I suggested to them is that there are really three buckets of artificial intelligence. There’s robotics, which has been around for a long time. There is also predictive analytics; while that has been around for a long time, there are now more commercial entities coming into play. Most of them have no healthcare experience. They give you a black box that you cannot validate. There has been a proliferation of these predictive analytic tools, and large language models will drive more development. With these large language models, there are also environmental and ethical considerations that will come into play.
Lastly, there is generative AI. A lot of major health systems are testing having AI draft physician notes, for example. Routine tasks like reading mammography screening scans to detect cancer are also things a computer can do much more effectively than a human, who gets fatigued over time.
Sarah Mossburg: What do you see as the future directions for learning health systems?
Lucy Savitz: When I think about a lot of the leading areas of patient harm, it would probably be the same list I made for you in 2003, which is sobering because we’ve put so much effort into trying to address these problems. There are demographic factors that are going to influence these areas of patient harm. With falls, for example, we have an aging population. As we transfer care from hospital-based to ambulatory to home-based, the people who end up in the hospital are the oldest, sickest, and most compromised. This may lead to more falls, not because the care has been compromised, but because there is a larger population at risk for falls. How we sort through those issues and how we design for safety will be important considerations.
Training newer staff is another consideration. There are many float staff going through our facilities who are not acquainted with our care processes. Newer, better ways to train are necessary.
Sarah Mossburg: I know we are coming to the end of our time. Is there anything we did not discuss or any question about learning systems that I did not ask that you think would be important to talk about?
Lucy Savitz: The future is about how we share and learn together. That means finding the groups that can do that and making room at the table for everybody, even if they do not have the data infrastructure. Constructing learning networks in a way that creates a seat at the table for everybody to learn is going to be important, especially across our diverse country.
Sarah Mossburg: That is a fantastic call to action. Thank you for speaking with us today.
References
- Shortell SM, Gillies RR, Anderson DA, Mitchell JB, Morgan KL. Creating organized delivery systems: the barriers and facilitators. Hosp Health Serv Adm. 1993;38(4):447-466.
- Center for Healthcare Organizational & Innovation Research. UC Berkeley Research. Accessed November 16, 2024. [Available at]
- Savitz LA, Kaluzny AD. Assessing the implementation of clinical process innovations: a cross-case comparison. J Healthc Manag. 2000;45(6):366-379. [Available at]
- Institute of Medicine. Roundtable on Evidence-Based Medicine. Olsen LA, Aisner D, McGinnis JM, eds. The Learning Healthcare System: Workshop Summary. National Academies Press; 2007. [Free full text]
- Friedman C, Rubin J, Brown J, et al. Toward a science of learning systems: a research agenda for the high-functioning Learning Health System. J Am Med Inform Assoc. 2015;22(1):43-50. [Free full text]
- Taenzer A, Kinslow A, Gorman C, et al. Dissemination and implementation of evidence based best practice across the High Value Healthcare Collaborative (HVHC) using sepsis as a prototype - rapidly learning from others. eGEMS: J Electronic Health Data and Methods. 2017;5(3):5. [Free full text]
- Savitz LA, Weiss LT. A data driven approach to achieving high value healthcare. eGEMS: J Electronic Health Data and Methods. 2017;5(3):1. [Free full text]
- Benuzillo J, Savitz LA, Evans S. Improving health care with advanced analytics: practical considerations. eGEMS: J Electronic Health Data and Methods. 2019;7(1):3. [Free full text]
- Cass A. Providence, Novant, Memorial Hermann, BSWH form Longitude Health: 6 things to know. Becker’s Hospital CFO Report. October 10, 2024. [Free full text]
- Partnership for Patients and Hospital Engagement Networks: continuing forward momentum on reducing patient harm. CMS.gov. Published September 25, 2015. [Free full text]
- Lannon C, Schuler CL, Seid M, et al. A maturity grid assessment tool for learning networks. Learn Health Syst. 2020;5(2):e10232. [Free full text]
- Active Learning Health Networks. Cincinnati Children’s. Accessed November 16, 2024. [Available at]
- Savitz LA, Kaluzny AD, Kelly DL. A life cycle model of continuous clinical process innovation. J Healthc Manag. 2000;45(5):307-316.