
In Conversation with… Edward Tenner, PhD

June 1, 2011 

Editor's note: Edward Tenner is an independent writer, speaker, and consultant on technology and culture. He received his PhD from the University of Chicago and has held visiting positions at Chicago, Princeton, Rutgers, the Smithsonian, and the Institute for Advanced Study, as well as a Guggenheim Fellowship. His book Why Things Bite Back: Technology and the Revenge of Unintended Consequences is a seminal work in patient safety and is generally credited with introducing the concept of unintended consequences, including those surrounding "safety fixes," to a general audience. His most recent book is Our Own Devices: The Past and Future of Body Technology. He is completing a new book on positive unintended consequences.

Dr. Robert Wachter, Editor, AHRQ WebM&M: What do you mean by the term unintended consequences?

Dr. Edward Tenner: I mean consequences that are not just disadvantageous but are actually inimical to the purpose one originally had. A revenge effect goes beyond a side effect. It isn't merely an unfortunate result of a therapy whose benefits outweigh it; it's a result that cancels out the reason for doing something in the first place.

RW: Can you give us your favorite example or two from outside of health care?

ET: One typical example that bridges health care and consumer behavior is the filter cigarette. Smoking one is really just as unhealthy as smoking an unfiltered cigarette because of the way the smoker compensates for the lower nicotine by inhaling more deeply. That is a frequent theme: people tend to offset the benefits of some safety measures by behaving more dangerously. I don't agree with a few risk analysts who see this behavior as a universal law: automobile seat belts actually do reduce deaths and injuries, although the record of, for example, antilock brakes is less clear. But it's common for safety technology to backfire.

RW: You've written about "revenge effects"—for example, that seatbelts or airbags lead people to drive faster. First of all, are these effects so anticipatable that we should now simply expect them? Second, how do you decide whether an intervention is going to have a net positive effect?

ET: Trying to prevent all revenge effects has revenge effects of its own, so I don't subscribe to the strong form of the precautionary principle, that you shouldn't try anything new unless you're sure it's harmless. This can mean locking in existing revenge effects. In the case of seatbelts and other road technology, I just read in The New York Times about how the latest airbags may be more dangerous for drivers who are using their seatbelts. Airbag inflation sufficient for an unbelted person or a large person may seriously injure a person wearing a seatbelt, a child, or a smaller person.

RW: We're going to segue into health care; let me do it via aviation, which is a common theme in patient safety. You have undoubtedly seen discussions saying that the modern cockpit has become so automated that pilots are lulled into a false sense of security and no one is really at the tiller anymore. What do you think about that general concern—that although automation may be good, one of its unintended consequences will be that people stop paying attention?

ET: I think that risk has always been there. In fact, that risk was used in the 19th century to oppose safety signals on railroads. People said that if you had these red and green lights, the engine drivers would just pay attention to the lights and neglect their senses. The problem is that arguments like that have been used against all kinds of genuinely beneficial systems that we take for granted today. I think it's really a matter of constant training and practice to be able to work in an unsupported or semi-supported mode. It's an organizational problem more than a technological one. I don't think the existence of something that's automated necessarily leads to the abandonment of skills, although the tendency is clearly there.

RW: I'm told that TSA airport screeners are periodically shown an image of a weapon in a suitcase just to be sure that they're awake. Can you envision technological fixes built into health care to create some almost artificial vigilance?

ET: I don't think it's necessarily a bad idea. The big problem is how far you are willing to go financially in continuing to train people and have them practice so that they can still work if the system fails. We saw that in the landing on the Hudson: here was a pilot who was widely considered to be "old school," so the question was raised whether younger pilots who are coming on now would have the same ability to work when the system fails. Of course, some people have also disputed how skilled Captain Sullenberger really was; some are second-guessing him, saying it might have been safe to go back and land, and so on. But it's clear that the public wants heroes, people who can improvise when the automated systems fail, who can somehow pull off a miracle and get the job done.
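
One way to picture the kind of "artificial vigilance" fix raised above is a sketch like the following. It is purely illustrative and is not drawn from the interview, from TSA practice, or from any real clinical system; the function names, the injection rate, and the scoring logic are all assumptions chosen for illustration. The idea is simply to mix known synthetic test items into a reviewer's work queue and track how many of those planted items get flagged.

```python
# A minimal sketch, assuming a hypothetical review workflow: synthetic
# "test" items (analogous to the planted weapon images mentioned above)
# are mixed at random into the stream of real work, and the reviewer's
# hit rate on those known items serves as a rough vigilance measure.
import random

INJECTION_RATE = 0.02  # assumption: roughly 1 in 50 items is a planted test case

def build_queue(real_items, test_items):
    """Mix known synthetic test items into the stream of real work."""
    queue = []
    test_pool = list(test_items)
    for item in real_items:
        if test_pool and random.random() < INJECTION_RATE:
            queue.append({"payload": test_pool.pop(), "is_test": True})
        queue.append({"payload": item, "is_test": False})
    return queue

def vigilance_score(queue, flagged_indices):
    """Return the fraction of planted test items the reviewer actually flagged."""
    flagged = set(flagged_indices)
    test_positions = [i for i, entry in enumerate(queue) if entry["is_test"]]
    if not test_positions:
        return None
    caught = sum(1 for i in test_positions if i in flagged)
    return caught / len(test_positions)
```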

RW: Checklists are a very hot issue in health care: the idea that things we've relied on human memory for can now be encoded in a series of manageable steps. What unintended consequences should we be on the lookout for as we adopt that strategy?

ET: I like the idea of checklists up to a point. But I'm worried about checklists because they make the assumption that all cases are basically very similar, and that's a normal assumption for professionals. A professional really does need to work to some kind of list of best practices, but on the other hand, when you look at the very best professionals, they are very often the people who have the ability to see something that's different in a particular case and to investigate it further. I know that in the current medical system that is very hard to do, given the workloads and systems of reimbursement, so I'm not blaming people for not being full-time problem solvers. What makes me uneasy about checklists is the assumption that there is a body of best practices and, if you just follow it, you're going to be okay. I don't doubt that checklists are very useful in preventing surgical errors, operating on the wrong limb, and that kind of thing. But my real question is: to what level? How far should that go?

RW: In computerizing health care, we've seen that alerts designed to cue people to remember that a patient is allergic to a medication, or to do thing x in situation y, are being ignored because the volume of alerts is such that people feel they cannot get their work done. Are you familiar with this issue from other lines of work, and what cautions or solutions should we think about around the computerization of patient care?

ET: The false positive problem is one that I hardly have to mention to people interested in patient safety and health care, and it appears in many other safety contexts as well, for example, the frequency of alarms. One of the issues with fire alarms is that if they go off too often, it becomes very hard to get residents of a campus apartment complex to take them seriously. I was in a building like that, and there were so many false alarms that I'm not sure how many people followed what Public Safety tried to get everybody to do, which was to evacuate every time. So an automated system like that runs the same risks, and I don't have a recommendation for dealing with it. But when you look at accounts of many other kinds of accidents, it often turns out that there was a signal, a sign warning of something, but people were used to disregarding it. There is another sociological element, and I hope it isn't widespread in medicine though it is in some other areas, that Harvard Business School professor Scott Snook calls "practical drift": organizations can start to deviate from established procedures and bend them if, in their culture, they feel this is really necessary to get things done. Snook was himself a victim of a friendly fire episode, and that experience made him deeply interested in how organizations go wrong when they are supposed to be as accurate and precise as humanly possible; his theory emerged from that experience. So I think the combination of a high frequency of alarms and pressure for productivity is potentially dangerous.
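
A small worked example may help make the false positive problem concrete. The numbers below are illustrative assumptions, not figures from the interview or from any published alert study; the point is simply that, by Bayes' rule, an alert watching for a rare event will fire mostly false positives even when the rule itself is fairly accurate, which is the arithmetic behind alarm fatigue.

```python
# A minimal sketch with assumed numbers: why most firings of a sensitive
# alert are false positives when the underlying event is rare.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(real event | alert fired), computed via Bayes' rule."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # Hypothetical drug-allergy alert: 1 order in 1,000 is truly dangerous,
    # the rule catches 95% of those, and wrongly fires on 5% of safe orders.
    ppv = positive_predictive_value(prevalence=0.001,
                                    sensitivity=0.95,
                                    specificity=0.95)
    print(f"Share of alerts that signal a real problem: {ppv:.1%}")  # about 1.9%
```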

RW: Let me turn to solutions, because it does strike me that your work is extraordinarily relevant to what we are doing in health care over the past 10 years, as we've thought about safety and quality problems. The epiphany was to approach these issues as system problems rather than manifestations of individual carelessness or sloppiness—most are good people trying hard and the systems have to be improved. As we think about improving and changing systems, are we always going to be empirically measuring these unanticipated consequences and revenge effects after we're done? Or are there ways to prospectively anticipate that something is likely to fall off the back of the truck and mitigate it before you've created the harm?

ET: It's possible to imagine the categories of problems that can arise through new technologies or new regulations. I was reading about the controversy over a robotic surgery system that requires a certain number of hours for proficiency and that can produce significantly better results if the surgeon is really experienced and adept, but that might also have a greater potential for adverse outcomes if the surgeon is not fully practiced and careful. There's enough evidence from that kind of system that when it is implemented, people can focus on, first, whom you practice on and how you get that experience, and second, how much continuing education a doctor should have in order to use it. Many of these things can be foreseen. Some of them involve interaction among the technological system, the organizational systems, and the organizational ethic. There is a concept called the "high reliability organization" that you might have encountered, and its exemplar is the flight deck of an aircraft carrier, which turns out to be a surprisingly manageable environment despite its unbelievable apparent physical risk, because people have been trained and drilled so well, and are also able to interrupt regardless of rank. That's something that medicine has learned and could continue to learn from: any member of the team, even the lowest in rank, can interrupt something the commanding officer has ordered if they see a potentially unsafe situation. The worst aviation disaster in history, at the Tenerife airport in the Canary Islands in 1977, occurred in part because the pilot was such a respected authority figure in European aviation that his subordinates didn't warn him. So there are various bodies of work, both about organizations and about particular technologies, that would let people stretch their imaginations in thinking about the possible risks of innovations; then they could be alert to watch for those risks and take action earlier rather than wait until a larger number of events has occurred.

RW: For the physicians, nurses, quality managers who do this kind of work, who are in charge of building new systems, do they need additional training to know about these effects? Is this human factors? Is this engineering? Or do we need an extra person sitting at the table who does this for a living to raise these concerns that we might not think of? We may be too much in the middle of the soup.

ET: I'm not sure that adding another category of professional wouldn't have some revenge effect of its own; that's one of the real problems of the field: when you think you're preventing one unintended consequence, you can actually unleash another that could be even worse. In the Gulf oil spill cleanup, for example, we were aware of the damage done by some of the chemicals used against the effects of the Exxon Valdez spill. The analysis of possible unintended consequences really depends not on a single body of doctrine or a textbook but on experience with many examples of things going wrong in many domains. It's a form of tacit knowledge. It's something that I have had to learn for myself by reading into the literature of many different fields, but one of the things I discovered is that when you start following this systematically, you start developing intuitions about the kinds of things that can go wrong. To me it's a little disappointing that the advocates of the new technologies under discussion aren't more sophisticated in dealing with these possible unintended consequences. I'm not saying that people should just pull back, never try anything new, and be paralyzed because there might be some harm, but people could use their imagination more in dealing with the world's complexity.

RW: Do you worry about groupthink? As someone sitting around the table hearing the plan, you can imagine that there probably is someone thinking, "But what about this," or "Isn't this bad thing likely to happen," but then feeling like they're going to be the skunk at the party?

ET: That really depends on the leadership of the group. If somebody in charge of the group really wants to push something through, they might pay lip service to these possibilities, but in practice they're going to find ways around them. This seems to have been the case, for example, in the federal Minerals Management Service. In studying some of the recent mine accidents, I also found that there was institutional pressure to have the most optimistic assessment of just about everything. I don't think the people involved necessarily thought they were creating some danger. I think it's possible for people to believe that a lot of requirements are really just paperwork—that it doesn't matter whether the systems are really adequate. The culture created at the top is more important than any number of experts you have at the table. I spoke at a conference of professional safety engineers a few years ago; these were executives responsible for the safety programs of entire companies. Their role, though, was really quite limited. They were not really able to say, "I don't think we should do this." Or they could say that, but they did not have the total professional autonomy to stop something in the way that the lowest ranking sailor on a flight deck can stop something.

RW: And so was their role to raise flags?

ET: Their role was to raise flags and make recommendations, and they took it seriously. My sense was that, although none of them complained about senior management or suggested to me that they were denied the ability to get things done, everything depended on the attitudes and values of the people at the top. Everybody else in an organization will tend to follow the lead of the CEO. This is true in government, it is true in industry, and I suspect it is also true in a medical setting. Hospital managers have to be especially aware of the risks of hierarchical organizations. Tokyo Electric Power Co. engineers warned of a tsunami risk several years ago, but TEPCO executives ignored their recommendations, and Japan's culture made the risk harder to discuss. So it may take a certain kind of consistent courage in all parts of an organization to face potential problems rather than make a series of optimistic assumptions. I think the problems usually come not because people are greedy or selfish or don't care, but because their biases and the pressures on them induce them to take an optimistic view of everything.

RW: I think the socio-cultural phenomenon that you've observed in other industries is what we're going through in health care, and we're trying to figure out how to change it. It's a heavy lift.

ET: I can understand that, and I think there are other dimensions in health care. One is patient expectations, which can cut both ways: in some cases, patients themselves may push for procedures or medications that increase their risk. Another is the widely discussed ability of medical professionals, much more than other professionals, to create demand for their own services, and those two interact. I'm not saying that is necessarily a bad thing; sometimes it's a good thing. It can put life at risk, but it can also enhance its quality. Either way, though, it sets medicine apart.
