In Conversation With… Charles Vincent, MPhil, PhD

June 1, 2012 

Editor's note: Charles Vincent, MPhil, PhD, a psychologist by training, is one of the world's leading patient safety researchers. He is Director of the Imperial Centre for Patient Safety and Service Quality and the Clinical Safety Research Unit at Imperial College. He is the editor of Clinical Risk Management and author of the book Patient Safety. He is a Fellow of the Academy of Social Sciences. We spoke with him about his career and about patient safety in the United Kingdom.

Dr. Robert Wachter, Editor, AHRQ WebM&M: As you entered the field of patient safety a couple of decades ago, coming at it as a psychologist, what were the biggest surprises that you found?

Charles Vincent: I suppose the most striking thing was how little was written about it. In 1989, I wrote a BMJ paper called "Research Into Medical Accidents: A Case of Negligence?" "Negligence" was a way of saying that it seemed ridiculous that nothing had been done on this. Why were we not studying accidents in health care when we obviously were very concerned about them in every other field?

RW: How do you think safety efforts in the UK have been shaped by the National Health Service and a single-payer system?

CV: It's a difficult question to answer. Historically, in the mid-1990s the concern about litigation turned into a slightly more proactive approach looking at risk management. I know in the United States patient safety and risk management evolved separately, whereas in the UK patient safety evolved out of looking at risk and litigation more clinically. At that time, we had a national system set up for insuring doctors in hospitals, which meant there was a national approach to appointing risk managers in hospitals. Then the next phase was the report from the Institute of Medicine and our parallel report, An Organisation with a Memory.

The fact that the latter report was written by the Department of Health is quite important, because I don't think any other country had released a government report about the scale of harm. It was around that time that the term patient safety, rather than risk management, began to be used in the UK. That led, not immediately but eventually, to our National Patient Safety Agency. Certainly that was a product of our government-run, single-payer system. There are many other facets to do with what the economic incentives are and so on. In fact, the economic incentives in hospitals still work against risk management to some extent, because there's a state-funded, mutual pooling system for all hospitals. So although they pay premiums, which vary a bit, individual doctors are economically protected from the consequences of litigation to a much greater extent than in the US.

RW: In the US over the last 5 to 10 years, the main economic pressure to improve has shifted from the malpractice system to a wide variety of drivers: more robust accreditation requirements, public reporting of quality and safety data, and now pay-for-performance or cuts in payments for untoward safety outcomes. What is the history in the UK over the last 5 to 10 years, and what are the parallels, the differences, and the efforts by the system to create "skin in the game" around safety?

CV: The most important step was the formation of a health care regulator in 2003. It's gone through various evolutions, but I was involved in the very first one called the Commission for Health Improvement. That's the explicit pressure, and the regulator above all is concerned with safety and harm. Regulation has progressively become a much more serious matter over the years, but it's not clear how effective it is and how much it really contributes to patient safety; it certainly exercises top managers and senior doctors a great deal when the regulator arrives for a visit. It's had some adverse consequences because there is an enormous amount of rather disorganized regulation from multiple perspectives in the British system. One of the adverse consequences is that hospitals have to report an enormous amount of data and provide an awful lot of stuff for the external people, whoever they may be. This tends to detract and distract from doing anything to actually improve things on the ground. I think that's a major problem for us.

RW: Is your sense that an exuberant regulatory response occurred in part because of the single-payer system?

CV: There's certainly a more politicized regulatory response. The regulator is meant to be independent of the Department of Health, but in practice there can be enormous influence from above. If there's an immediate firestorm about something, the regulator gets caught up in it straight away. So the single-payer system is important in the sense that regulation can become politicized very easily.

RW: Tell us what has happened in the UK around reporting. What is required to be reported, have reports led to action, and how is the reporting system organized?

CV: Reporting systems began around the mid-1990s and had been running for some years when An Organisation with a Memory was published. The main thing that emerged from that report and the National Patient Safety Agency was a huge reporting system. The idea was to aggregate the existing reporting systems and learn lessons nationally. There were some very good reasons for that, because there could be a disaster in one part of the country almost identical to one in another part of the country, and the two would never be linked. But this has mushroomed into an enormous system, which I think has received 9 million reports.

The National Patient Safety Agency had a somewhat checkered history, and I think it has often been very unfairly maligned after doing some very good things. But it's been closed down by the Conservative government and is in the process of folding. The actual reporting system will be looked after at Imperial College. Now, what's reported into this big incident reporting system is very open. That's one of the strengths if you're running an aviation reporting system and just trying to capture all the unusual events; then, obviously, you want your definition to be very open. But health care is different and requires a different kind of reporting. For instance, there are millions of falls reported in this database, but relatively few classic adverse events and major clinical events.

There's another form of reporting, which is rather different from this voluntary reporting system, the reporting of infections. It's now mandatory to report cases of MRSA, C. difficile, and so on. The pressure on this has increased markedly over the years, and there's been a huge national pressure on both reporting and control of infections. This is very public, very visible, and if you are running a hospital you can get tremendous heat. If you're an executive, it's perfectly possible to lose your job if you have a major, uncontained outbreak. It's one of the areas of safety where a big national system has had a major impact because there has been action right across the board.

RW: It sounds like you believe it's been a positive impact?

CV: Yes, overall. I can remember 20 years ago working with some microbiologists and people who were in despair about their complete inability to influence the awful situation they saw around them, because infection control was in the hands of a few people in a lab and was not seen at all as an organizational priority. It's a familiar safety story, but that was just one of those things that has now become unacceptable.

RW: So turning back to the former kind of reporting, the 9 million reports to the National Patient Safety Agency, the US doesn't have anything like that. Periodically, when an article or a report comes out saying we haven't made as much progress in safety as we should have, there are calls for a national reporting system. Given your experience with that broad-scale reporting, would you have any recommendations for us if we were considering going down that pathway?

CV: The critical recommendation I have would be to first think about what you would do with the information. When I first started to think about reporting, like everyone else, I thought about how you get the reports in and what the barriers to reporting are. We did some papers and studies on this in the late 1990s. But if I were thinking about it now, with hindsight, I would start at the other end and think, "Well, if I had these reports, how would I analyze them? Who would analyze them, and how would I get it back into the system?" I don't think we've ever really worked that out fully, and a lot of the criticism of the agency was that it hadn't had enough impact. It's an odd sort of criticism in a way; they did what they were asked to do. The fact that reporting systems per se are not going to sort out all our safety problems should not be blamed on the agencies running the reporting system.

One recommendation is to think very hard about the feedback and the actions before worrying about collecting reports. Once you start thinking like that, you ask, "Well, how many reports do I need?" If you want to study falls on a national basis, for instance, after you have 500 or 1000 falls you probably have a pretty good picture, and you need to move on to doing something about it. The second thing, and the biggest problem, is that the reporting system is endlessly and still tediously confused with measurement. People worry about whether reporting is going up or down. That has delayed measuring safety-critical issues, whether it's infections, falls, central line problems, or a lot of the other things that are measured now in a safety program. We should be looking for ways of actually counting these things and taking them out of the reporting system. If we had more safety measurement and good systems for doing it, then we might be able to make better use of reporting systems, which would then be for the unusual things, which I do see as valuable. Clinicians can just get on the phone and say, "There's something very odd happening here," or, "I don't know what's going on, this ward's in trouble." I think that's the real use for reporting systems.

But ours just mushroomed and in a way tried to take on, or was pushed into, trying to do things that a reporting system was never intended to do. It's a little better now, but if you and I were trying to decide how much of a patient safety measurement system reporting should occupy, I might say it's useful, but we might put it at 2%, maybe 5% if we were getting optimistic. Whereas for a long time in Britain the reporting system was seen as 70% of what patient safety was. And that is still a major problem.

RW: I'm going to turn to the role of frontline providers in safety and quality in the UK. One of the things that struck me from my time there [in London on sabbatical in 2011] was the extraordinary contribution of nonproviders like you to the study of safety. But conversely, it seemed to me that there was less engagement in safety by frontline doctors and nurses than I've seen in good institutions in the US. Do you think that's true, and if so, why do you think things have evolved that way?

CV: I don't really know whether it's true, because I don't think I know the US as well as you know the UK, having been here. The people I know in the US are those engaged in safety, so it's hard for me to judge, but I can make some guesses. What you were saying strikes me as correct: in the US (and this goes back to the Institute of Medicine report and An Organisation with a Memory) the effort has been professionally led from the outset. Your big report came from the Institute of Medicine, from the professional associations, from the doctors and nurses and so on, and the whole impetus has been professional and clinically led from the beginning. Whereas ours, although there are many clinicians involved, came out of the Department of Health, and this is a mixed blessing. On one hand that's good, because you can do national programs and think nationally about the whole system, and all of the good things about that. But on the other hand, it's quite hard to then get traction and local improvement.

Although we've had some national organizations and lots of other people trying to stimulate local improvement, there's always a slight problem of "We're from head office and we're here to help you." The safety improvement work didn't grow enough on the ground. This is changing, and I think it's interesting that in recent years our networks of hospitals and other organizations have been coming together to develop safety programs. It may be, and this is a bit of speculation, that having a national agency gives safety a lot of profile, but it may also lead to a feeling among some that safety is being sorted out by the National Patient Safety Agency or, if you're a chief executive, that we're sending our reports in and doing what we're required to do. This is again the question of feeding the national system, the national priority, which is what chief executives are driven by in the UK. This may have slowed the more basic professional and clinical realization that people are getting hurt and we ought to be doing something about it.

RW: Some piece of it sounds like it's cultural, the ownership of the problem by frontline caregivers. It strikes me that there might be another piece, which is the inclination of a single system to be top down and to articulate and operationalize a problem like safety as a series of rules, which in some ways flies in the face of what we're coming to learn about complex systems. Do you think there's any of that going on?

CV: Yes, I do, very much. There are two aspects. One is the idea that you can manage things entirely by top-down control. I'm by no means against that, but the problem is that it becomes so massively complicated; it's the cause of a lot of confusion. The second is where safety comes to be seen as equivalent to whatever the dictates are from the top, so it gets prioritized as only being concerned with 4-hour waits in the emergency department or infection control or something, and it becomes too narrow. There's no reason that clinicians shouldn't take a different view, but if you think about all the safety problems one faces in different areas, whether it's internal medicine or pediatrics or whatever, there are multiple problems, and it's obviously completely beyond central dictate to address all of them. You must rely on professional organizations and people on the ground to start looking at safety in their own environments and looking for solutions. So I think both the extent of top-down control and the narrowing of the agenda are problematic.

RW: You said you've noticed a change in the last 2 to 3 years. What do you think is creating that?

CV: I'm not sure I'd single out one thing. I think of it as a growing realism about what's going on. Firstly, I think people realize now that incident reporting is not going to solve all our problems. There are lots more people now who have been involved in safety improvement programs; for instance, the Health Foundation ran a major improvement program called the Safer Patients Initiative. Although the formal evaluation didn't show much change, it was very influential because a great many people were involved in it, and it led them to go to IHI [Institute for Healthcare Improvement] or engage in improvement, whether safety or quality or whatever. I think the professional associations have been slow. I'm not familiar enough with all of them, but they have started putting on training.

There is more on patient safety in education and training, and I think the research has taken off, and for some people that's another influence. For instance, there's been an explosion of work from groups all over the world on surgical teamwork and the relationship to safety, error, communication, and all these kinds of things. So, a clinician now can talk about being interested in safety improvement without jeopardizing their career or being seen as a bit fringe.

RW: I've emphasized the single-payer structural issues and how they've determined the response, helping in some ways and hindering in others. One of the other differences between the US and the UK is the amount of money we spend on health care. We spend 17% of our GDP; you spend a little less than 10%. Any sense of how the resource constraints have influenced the response to safety there?

CV: Well, there have always been resource constraints and people saying there isn't enough money, for as long as I can remember. As you know, there's been a big increase in spending in recent years under the Labour government. I don't know if it has to do with the resource constraints, but any time our system is under pressure, which is very often, it's very hard to get spare money for doing anything about improvement. I think the default position is to do whatever the outside regulators and people demand, and then keep your head down. This is a rather crude response, and it's still very hard to make an argument to the senior managers and executives who control the money that you should do something simply because it would improve care for patients.
