In Conversation With… Rebecca Lawton, PhD
Editor's note: Rebecca Lawton, a Professor in the Psychology of Healthcare at the University of Leeds, is a health psychologist who conducts research on human factors and patient involvement in patient safety. She is Director of the Yorkshire and Humber Patient Safety Translational Research Centre, one of three such centers in England, funded by the National Institute for Health Research. We spoke with her about her experience with patient engagement and insights gleaned from her research.
Dr. Robert M. Wachter: The main focus of this interview is the patient role in patient safety, but I can't help but ask you about railway shunters.
Rebecca Lawton: I was lucky enough to start my career with James Reason supervising my PhD at Manchester University. At the time, British Rail had a problem: about one in a thousand shunters were dying every year on the railway, usually squashed between the buffers of the trains while getting the shunting pole in place to connect two trains together. British Rail wanted to know why shunters didn't follow the rules meant to keep them safe and prevent these accidents. I spent 3 months training to be a shunter and then spent a lot of time in shunting cabins around the north of England, observing, interviewing, and surveying shunters about why they worked as they did. It might be considered "work-as-done" versus "work-as-imagined," in today's parlance. James Reason and I developed a classification of procedural violations—the reasons why shunters chose not to follow rules.
RW: In terms of what you eventually ended up applying in patient safety, what were the general lessons from the shunters around procedures, rules, and the way people react to them?
RL: The general lessons were very much around needing to understand safety in a complex system and understand that although individual behavior is important, contributory factors within the organizational system—culture, equipment, etc.—are key to patient safety outcomes as they are in other industries.
RW: Why aren't organizations good at coming to the understanding that they need to know the lives of the shunters to make sure that the rules make sense?
RL: Primarily because rules serve a function for organizations. When I started, British Rail had a rule book the length of the Bible, with hundreds of exceptions. Generally, that gives workers the sense that the rules aren't really meant to protect them. They're there to protect the organization, and they're not actually that important for safety. The general sense of the importance of the rules becomes eroded as their number grows. Seeing that, in certain situations, supervisors are keen for you to deviate from the rules if it means you can be more productive and get the job done also sends the wrong message about why rules are in place. I do see some of those things happening in health care, particularly in response to patient safety incidents, where the immediate reaction of managers is often to write a rule to prevent the same thing happening again, based on the assumptions that staff will follow the rule, that it will be easy to follow, and that similar risks can then be avoided.
RW: If you're taking the manager's world view, that seems like a sensible thing to do. You hear about something that went wrong. It feels like if there were simply a rule about it and everyone followed the rule, all would be well. It would take much more time and energy to understand the way the work is done from the perspective of the workers. Those seem like such strong obstacles to creating an organization where rules are made more sensibly. How do you overcome those obstacles?
RL: One problem in health care is we tend to want to learn from each individual event and try to prevent that single event happening again. We tend not to do the more systematic learning from a number of events or trends that might point to changes needed at a system level. We could argue that if certain system-level problems occurred across 100 or 200 events, then investing the significant resources needed to address those problems might be justifiable. One of Jim Reason's metaphors is swatting mosquitos rather than draining swamps. That's what we do in response to incidents: We try to swat mosquitos. We try to develop quick, sensible fixes, but we're never going to get to the fundamental problems that caused those incidents in the first place. Some problems can only be fixed at a political level. It's almost impossible for individual managers in organizations to address them themselves.
RW: Is there something unique about health care that we do this even less well than other organizations? Or are these generic issues?
RL: I think they're generic issues, but incidents in health care are very common, which means that we're doing a lot of swatting mosquitos. We're spending a lot of time on analyzing those incidents, developing the reports, and trying to develop remedial actions and action plans based on those. A lot of resource goes into dealing with individual incidents in health care environments, probably much more so than in other high-risk environments.
RW: Let's shift gears to the role of the patient in safety. Is there a natural genealogy that goes from railway shunters to understanding the predicaments of patients and their point of view?
RL: In health care, the interaction between the patient and clinicians is critical, so it seems sensible to involve the patient. But I would probably be exaggerating if I said there was a nice, linear, and seamless relationship between my early work in shunting and my current work. The Yorkshire Quality and Safety Research group was lucky enough to win £2 million of funding several years back from the National Institute for Health Research in the UK to look at what the role of patients in improving patient safety might be. We focused on how patients could provide feedback on patient safety. One of the work packages also looked at how patients could be involved as educators of undergraduate medical students.
RW: Did you have a set of hypotheses about what the role of patients in protecting themselves should be (or was) that you were trying to either prove or disprove?
RL: When we started, there was very little work in this area. Some studies in the States had looked at whether patients were able to report on patient safety incidents and whether they reported different things from what you might find in the medical notes or in standard reporting systems. There was a suggestion that there were differences, and that there was value in asking patients to report on safety. We also knew from the literature that patients were somewhat reluctant to directly challenge those caring for them, which didn't surprise us. When we set out to explore whether patients would provide feedback, one thing we were very sensitive about was who they might be willing to provide feedback to. It soon became clear that patients felt quite uncomfortable providing feedback about the safety of their care to the people directly caring for them.
RW: When I think about patient involvement, it has multiple aspects. One is whether patients can be effective observers of the environment and identify errors; another is whether patients can protect themselves. Are there actions that patients can take?
RL: Patients can get involved in lots of ways, and some are more controversial than others. Patients have a unique perspective on the care environment and can provide really valuable feedback on it. They can also be involved in education, particularly where they have extensive experience of health care or have perhaps experienced harm. They can be involved in codesigning interventions, services, and new processes, such as with experience-based codesign work. Patients can also be involved in setting priorities. We work with patients to set priorities for research along with improvement and service changes.
There are also ways that patients can get involved in maintaining their own personal safety. Studies have explored challenging staff about washing their hands, marking surgical sites, or checking medications with their health care professionals. Then there's a shared role: sharing the monitoring of care and treatment with those caring for them. For example, can patients recognize their own deterioration, and can that become part of the early warning score that helps to identify patients who are deteriorating and flag that to clinicians? Currently, we're involved in work where patients are encouraged to be more involved in their diagnostic decisions for cancer by monitoring symptoms and returning to primary care physicians earlier if symptoms change or continue for lengthy periods of time.
RW: What did you set out to try to understand better through your research and what are your findings?
RL: We probably have the strongest evidence in the patient feedback area. When we first approached patients, we learned that they found it difficult to understand what we meant by safety incidents. So we developed a questionnaire, based on an organizational accident model, that asked patients about the factors that contribute to patient safety incidents. We found we could ask patients—and they would very readily respond to—questions about equipment in the work environment, whether staff seemed well trained to do the job, whether staff were communicating effectively with one another, whether things were delayed, and whether they were receiving their medications in a timely manner, for example. Patients seemed very able to observe and provide feedback on those things. We also asked whether they had any safety concerns.
The two pieces of data collection together seemed to work: the qualitative piece asking them about their concerns and this questionnaire that we called the patient measure of safety. We've tested and validated both those tools. Then we conducted a randomized controlled trial in which 33 wards across the Yorkshire and Humber area were given these tools, and researchers elicited feedback from patients. We randomized wards to receive this feedback or not. Then we asked the ward staff to look at the feedback, to meet as a multidisciplinary group to consider it, and to make action plans about how they would improve. The randomized controlled trial actually produced a negative result. We found that patients were very willing to provide feedback through the tools (86% response rate). Staff mostly valued the feedback, but they found it very difficult to meet as a multidisciplinary team. When they developed action plans, they found it very difficult to implement them where doing so required resources or expertise from elsewhere within the organization. When they could do the fix themselves, they were able to do that. That was fine. But many of the solutions required pharmacy involvement, IT involvement, or senior management approval, and they didn't find that easy to deal with. That resonates with other areas of improvement, not just improvement around patient safety feedback.
RW: In some ways, you've demonstrated that making change in complex organizations is hard. Do your results mean that hearing about a safety problem from a patient doesn't make it any more likely that you're going to be able to change things than hearing about it from an incident report?
RL: That's absolutely the case. We did wonder whether staff would take the patient feedback seriously. On two wards, we found that they were dismissive of patient feedback. But on most wards, that wasn't the issue at all. They were very keen to get the feedback, but doing something with it or making changes based on it was what they found to be the barrier.
RW: What do the negative results say about this line of work? Is this an area that deserves more focus? Or have you demonstrated that, even though it sounds like a good idea (and that, ethically, involving patients also seems good), it just doesn't work?
RL: It's not necessarily that the data from patients are any better or worse than any other measure of safety you might take. Actually using that information to change and to make improvements is the challenge. As you said, the same holds whether the basis for change is a measure of staff safety culture or a measure of incident reports. It doesn't seem to be the data collection mechanism that's problematic here: it's the next step. How do we change behavior? How do we change systems on the basis of these measures? Patient safety feedback is just one of a set of measures that you might use. I don't think we should necessarily stop thinking about this. It is only one negative trial, after all. Perhaps the next step is developing an intervention that builds change mechanisms into the process of gaining and using patient feedback on safety.
RW: You can also argue there are other values to the patient feedback. Did you find that patients like doing this and does it engender trust in the system?
RL: Patients certainly liked doing it. There are some occasions where you capture issues that need to be fed back to the team immediately. That was an interesting and important function of the work. It was one of our ethical safety clauses—if we identified something that the patient said that affected their care in the moment, we would feed that back to the teams involved. But I'm sure patients wouldn't be very happy to think that they were collecting feedback that didn't get used to effect change. One thing we might consider doing is involving the patients in making the changes happen. While a member of staff might say, "It's just not possible to do that" or "I can't see how we could make that work," patients have a different perspective sometimes. "I'm sure we can make that work. There must be a way of doing this." Perhaps that's the way we go next.
RW: Let's turn to this issue of patients and their ability to protect themselves, whether it's bringing a vigilant family member or patients speaking up to their caregivers or providers when they see something that seems amiss. What have you learned about that and whether it's acceptable to patients and whether it works?
RL: Some of this is in the published literature, and some of it is what we've learned from doing the work ourselves. Patients have a unique insight: they don't see their care in silos in the way that perhaps we deliver care. Patients are very good at seeing their care across the continuum, across transitions, and understanding where a mistake here might lead to another mistake or the exacerbation of a problem over there. They can see the whole care pathway. The problem is that some people just don't want to be involved, some find involvement very challenging, and some are not able to be involved. If we rely on that level of involvement, we risk creating inequalities in health. We've certainly seen that some patient groups are less willing to get involved, particularly older patients, and it is more difficult still to involve older patients with dementia. You might then rely on a family member, but if the family member is less present, that patient might receive less safe health care. There are obviously issues and problems with relying on patients to be the safety net or the defense mechanism, although some are very willing and happy to be involved in that way.
RW: I've either heard or made two other arguments. One is that there is evidence that patients feel guilty about errors, and involvement could raise their level of guilt if something does go wrong. The second is that British engineer Melinda Lyons wrote an article about the predictability of safety systems: because it will always be unpredictable whether a patient can be involved in their safety, by counting on such involvement you've added another element of chance.
RL: That's right. I wouldn't want to dismiss outright patients' involvement in protecting themselves. But it's more problematic than some other areas, and certainly there needs to be much more caution about it. Even in terms of partnership working, some patients like that involvement and feel able and willing to take it on. Others want to be more passive recipients of care, so there's a danger in expecting patients to take on more of the responsibility.
RW: As systems become more and more digital, do you see the nature of patient engagement changing in any fundamental ways?
RL: I do, and you already see that. I don't know whether you have similar systems in the US, but we're seeing spontaneous reporting systems where patients can write stories about the care that they've received. There's one in the UK called Care Opinion. The health care organization can respond directly to the patient's feedback and, if it chooses, can involve the patient in addressing the issue that they've written about. There are a number of these different sites springing up, but Care Opinion is probably one of the best known around patient experience and safety.
The other way that patients might be more involved is in the viewing and assessment of their electronic health records. People in the UK don't all have automatic access to their electronic health records. There's great potential for patients to have access to their records and to pick up on issues around medications or around test results not being received in a timely way. But again, the more educated might find it easier to understand, access, and use those electronic health records for promoting safety. Then there is the project where we are collecting patient responses about their current wellness that can then feed into the early warning system. That would be a way of directly using patient information to feed into an algorithm to detect deterioration. We're only at the very early stages of this work, but there is exciting potential. There are problems of course. For example, when patients are very ill, they're clearly not going to be able to provide that piece of information. At what point can you elicit that information from patients, and is it actually the point where you would most need it? There are some questions around whether that's even feasible or useful. But it's a project that we're working on.
RW: Anything that you wanted me to ask you that I didn't?
RL: One thing that perhaps is obvious but worth saying: Supporting patient involvement in patient safety does require a different culture of communication, an acknowledgement of uncertainty, and an acceptance that sometimes things do go wrong. It requires a different kind of conversation between patients and those caring for them, and an acknowledgement that not everybody wants to, or can, be involved in patient safety.