
In Conversation With... Donald A. Norman, PhD

November 1, 2006 

Dr. Robert Wachter, Editor, AHRQ WebM&M: Tell us a little bit about your background. How did you become interested in this kind of work?

Dr. Donald Norman: I started off in engineering, but switched to psychology for my PhD. Initially, I was interested in the classical problems in psychology (memory, attention, thinking, problem solving, language) until a colleague told me about some errors she was making, and I realized we didn't know much about errors. And so I started studying everyday human errors—"slips," as I called them.

Right about that time, Three Mile Island happened, and I got called in with a bunch of other psychologists to help retrain the operators. As we actually looked at the nuclear power plant, we realized that had you deliberately designed the operation to cause errors, you probably could not have done a better job. So that rekindled my interest in engineering and the interface between technology and people.

From there, I started looking at aviation safety, working for NASA and the military. I also did some work in medicine, mostly in anesthesiology. Then one year, I took a sabbatical in England and had so much trouble working the water faucets, light switches, and doors that I realized that the same principles I was developing for nuclear power plants, commercial aviation, and computers worked for doorknobs, light switches, and water faucets. Hence, the book, "The Design of Everyday Things," and that's how I got to where I am.

RW: What have you come to learn that engineers don't understand about psychology?

DN: Engineers—and physicians—think logically. They analyze the problem, they analyze the possible solutions, and they design a solution. And they get very upset when people don't behave logically. Now anybody who designs anything has a problem of understanding the thing they're designing too well, in that they can no longer really understand how a new person would act. That's why anyone who writes has an editor, because you cannot even proofread your own work. When I'm consulting with companies, I frequently have to explain to the engineers that they have to design for people the way they are.

RW: Is it purely a problem of the expert having trouble recognizing what novices do and don't know?

DN: Well, there are two problems. Yes, one is that your expertise blinds you to the problems that everyday people have. But second, you focus too much, and don't appreciate that all the individual elements of your work, when combined together, create a system—one that might be far more error-prone than you would have predicted from each of the individual components. For example, the anesthesiologist may review beforehand what is going to be needed. And so he or she picks up the different pieces of equipment that measure the different things, like the effects of the drugs on the patient. Each instrument actually may be designed quite well, and it may even have been rigorously tested. But each instrument works differently. Perhaps each has an alarm that goes off when something's wrong. Sounds good so far. But when you put it together as a system it's a disaster. Each alarm has a different setting, and the appropriate response to one may be incredibly dangerous for another. When things really go wrong, all the alarms are beeping and the resulting cacophony of sounds means nobody can get any work done. Instead of tending to the patient, you're spending all of your time turning off the alarms. So part of the problem is not seeing it as a system, that things have to work in context. And that these items actually should be talking to each other so that they can help the anesthesiologist prioritize the alarms.
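
One way to picture the kind of coordination Norman describes is a central alarm hub that every instrument reports to, so that a single prioritized alert is surfaced instead of a cacophony. The sketch below is illustrative only; the device names, priority values, and the AlarmManager class are invented, not taken from any real monitoring system.

    from dataclasses import dataclass, field
    import heapq

    @dataclass(order=True)
    class Alert:
        priority: int                       # lower number = more urgent
        device: str = field(compare=False)
        message: str = field(compare=False)

    class AlarmManager:
        """Hypothetical hub that instruments report to instead of each
        sounding its own alarm independently."""

        def __init__(self):
            self._queue = []

        def report(self, device, message, priority):
            heapq.heappush(self._queue, Alert(priority, device, message))

        def most_urgent(self):
            # Surface one prioritized alert rather than a cacophony.
            return self._queue[0] if self._queue else None

    # Example: three instruments report at once; only the top alert is shown.
    hub = AlarmManager()
    hub.report("pulse oximeter", "oxygen saturation trending down", priority=1)
    hub.report("infusion pump", "line occlusion", priority=2)
    hub.report("ventilator", "filter due for service", priority=5)
    top = hub.most_urgent()
    print(f"[{top.device}] {top.message}")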

There's another critical factor that I've recently started to explore. Equipment is often designed to be in one of two states: everything's okay versus things are really bad. That's important to know, but it is more important to know how close you are to the edge, and almost no equipment tells you that. We need to know that we're nearing that point where, if we don't react, some danger is going to happen soon. Don't give me an alarm when that happens—tell me a few minutes before so maybe I can prevent it.
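
A rough way to make "how close you are to the edge" concrete is to extrapolate a reading's recent trend and warn when it is projected to cross a danger threshold within some lead time. This is a minimal sketch under that assumption; the thresholds, sampling interval, and function names are hypothetical.

    def minutes_until_threshold(readings, threshold, interval_min=1.0):
        """Estimate minutes until a series of readings (one sample every
        interval_min minutes) crosses the threshold, using a linear trend.
        Returns None if the value is not heading toward the threshold."""
        if len(readings) < 2:
            return None
        slope = (readings[-1] - readings[0]) / ((len(readings) - 1) * interval_min)
        if slope == 0:
            return None
        eta = (threshold - readings[-1]) / slope
        return eta if eta > 0 else None

    def early_warning(readings, threshold, lead_time_min=5.0):
        """Warn a few minutes before the alarm condition, not when it arrives."""
        eta = minutes_until_threshold(readings, threshold)
        return eta is not None and eta <= lead_time_min

    # Example: a value drifting upward toward a limit of 100.
    print(early_warning([90, 92, 94, 96], threshold=100))  # True: about 2 minutes away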

RW: How do you balance that goal with the concern that we don't overload people with too many alarms and too much information?

DN: Many fields, like commercial aviation, manufacturing plants, and for that matter modern medicine, are making machines and systems more and more intelligent, to take over more and more of the load from humans. However, in all of these areas, dangerous situations can result when these systems collapse, when they make errors, or when they get into conflict with what the people are doing, since the workers no longer understand what's going on inside the machine. When you actually think about it, you realize the machines are not intelligent. The intelligence is all in the designer. What the designer tries to do is to imagine all those situations that might arise and then provide the machine with some way of coping with them, but designers cannot think of everything ahead of time. They miss a lot of critical situations. They miss the unknown. They don't understand the context. There are two things we can guarantee about these types of unexpected events: One is that they will happen, and two, when they happen they will be unexpected. And the automatic systems will not cope. And the human working with the machine has no way to figure out what is wrong and how to react.

RW: So what's the solution?

DN: First, we need automatic equipment that at least can notice when we're reaching an untenable situation and comment. Also, I believe that we can actually give a tremendous amount of this information in an unobtrusive, natural way. I'm trying to develop what I'm calling "self-explaining machines," which are always giving an indication of their state or problems they're having, so that they're not interrupting you all the time. We're used to this with mechanical machines—your vacuum cleaner or your brakes, as they go bad, begin to generate an unnatural noise, which you've come to learn means "something is wrong and I better look into this." So one idea is to try to introduce naturalistic sounds and peripheral displays into electronic devices, which give information that things are going well or not so well. One of my examples is the backup distance alarm in some modern automobiles. You back up and when there's an object behind you, the car beeps. As you get closer, the beeping rate increases. You don't have to read the manual to understand it. Nobody has to explain it to you. The first time you hear it, it just makes sense. And it doesn't get in the way either. In fact, rather than getting in the way, we come to count on it. It's supposed to be a warning, but we actually use it as an indicator of how far to go.
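
The backup alarm he describes amounts to a simple mapping from distance to beep interval: the closer the obstacle, the faster the beeps. A toy version, with invented distance bounds and timing constants, might look like this:

    def beep_interval_sec(distance_m, max_range_m=2.0,
                          min_interval=0.1, max_interval=1.0):
        """Map distance to the nearest obstacle onto a beep interval:
        far away means slow, occasional beeps; very close means a
        nearly continuous tone. Returns None when nothing is in range."""
        if distance_m >= max_range_m:
            return None
        fraction = max(distance_m, 0.0) / max_range_m   # 0.0 touching .. 1.0 at edge of range
        return min_interval + fraction * (max_interval - min_interval)

    for d in (2.5, 1.5, 0.5, 0.1):
        print(d, beep_interval_sec(d))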

So there are several principles that, I believe, apply to health care. In the operating room, where a lot of these issues come up, a steady background noise might actually not be bad: if it's a natural sound that carries some information, it doesn't interfere with the talking that must go on. I think it would be less intrusive than what we have today. Today, if the surgeon or the anesthesiologist wants to know what's going on, they have to take their eyes away from the patient and look up and around the room to where the charts and displays are located.

RW: Essentially, many different data streams go into whether or not things are okay; somehow the machine would have to integrate them all. It seems to me the backup sound is fairly one-dimensional—I'm backing up and getting closer to a fixed obstacle. But in the OR you're talking about a lot of things going on at the same time—how do you encapsulate that in a single warning?

DN: What we need is systems thinking, first of all, and second, integrated displays that capture a large number of variables. In today's jet cockpit, there are only a couple of critical displays, two or three at most. But each display now has this wonderfully integrated picture—it is not only easy to understand, but it incorporates what might have previously been five or ten separate instruments. That is what we need in the hospital, so that you can look at one display and see roughly where you are within the important parameters, which way you're moving, and how close you are to the danger points.
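
One simple way to imagine such an integrated display is to normalize each monitored variable against its safe range and report, in one place, where it sits, which way it is trending, and how much margin remains. The variable names and ranges below are purely illustrative.

    def summarize(name, current, previous, low, high):
        """Reduce one variable to a position within its safe range, a trend
        direction, and the margin to the nearest limit, so that several
        instruments can be read at a glance on one display."""
        span = high - low
        position = (current - low) / span                      # 0.0 at low limit, 1.0 at high
        margin = min(current - low, high - current) / span     # distance to nearest limit
        trend = "rising" if current > previous else "falling" if current < previous else "steady"
        return f"{name}: {position:.0%} of range, {trend}, margin {margin:.0%}"

    # Hypothetical variables and safe ranges, for illustration only.
    print(summarize("heart rate", 88, 84, low=50, high=120))
    print(summarize("oxygen saturation", 93, 95, low=90, high=100))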

RW: I think there's a general feeling that health care may be more complex than many other endeavors that have man–machine interfaces. What principles or unique attributes about the health care setting do you think we should understand better?

DN: The health care setting is particularly difficult for several different reasons. First of all, a lot of the machinery and equipment is not well designed. Second, the equipment that is well designed is still not well integrated through systems analysis. We have situations in which there are two different brands of a device, both of which do the same thing but work on completely different principles. I know that nurses have a horrible time keeping track of that. It might take an hour to learn the equipment—the equipment isn't that hard, but you might have five different versions to learn.

RW: Our hospital had an error with the use of a defibrillator a while back, and we found there were a dozen different kinds of defibrillators scattered around the building.

DN: It would be far better to standardize on one brand, one model. Until the equipment manufacturers get together and develop common standards, the only way to cope is to stick with a single manufacturer. Some of the new airlines have decided that they're simply going to buy one airplane type from now on. Even though another airplane might be slightly superior, pilots have to be trained and licensed separately for each type they fly. So a pilot licensed to fly a Boeing 737 cannot fly a 747 or 757, let alone an Airbus. Having one type simplifies the maintenance; it simplifies everything. We need more of that in hospitals.

But the hospital is also a very complex functional organization. Huge numbers of errors in communication occur for a wide variety of reasons. There's also a very complex social and organizational hierarchy that makes it difficult for people lower on the hierarchy to critique or speak up when they see a problem with someone higher up. These hierarchies have to be overcome. Aviation has made a tremendous effort to encourage junior pilots to speak up and critique the senior pilots. Even when a senior pilot is questioned about something being done correctly, the expected response is, "I'm glad you asked that. It's important to check." We need to have that in a hospital, where nurses can critique physicians and even surgeons—and where the surgeons say "thank you." The other thing is that we learn a lot from mistakes and errors that do not lead to harm. The whole culture of aviation encourages admitting your mistakes, so everybody benefits from it. If an experienced pilot makes a mistake, others have probably made it also. Therefore, we can try to minimize either the occurrence of those mistakes or their impact. That's difficult in medicine for several reasons. One, it goes against the culture; two, there are very strong legal issues.

Part of the problem is not only that it's a complex social, equipment, and personnel issue, but that the standard belief across all professions is that skilled people don't make mistakes. Therefore, if someone makes a mistake, it's assumed that it was because that person wasn't good enough. And when I make a mistake, it's my own fault, and I should have known better. In some sense, those are bad attitudes, and in some sense they're correct. We have to worry that people don't pay as much attention as they should. But on the other hand, that's a fact of life—people cannot pay attention all the time, and not everybody is as good as everybody else. People work when they're tired, and people work when there are many pressures on them. So all of us in everyday life forget things, make mistakes, or occasionally do the wrong thing. If we admit it, then we could design procedures and systems that minimize its impact. The problem is we tend to deny it. Therefore, if somebody makes a mistake we either cover it up, refuse to admit it, or punish the person, instead of asking, "What gave rise to that mistake?" Then maybe we could change the situation.

RW: How would you try to apply this thinking in a health care organization?

DN: Getting the medical staff to see errors as systems issues would make the biggest difference. If you had everybody working together as a team, other solutions would follow automatically. The hospital or clinic would instinctively think of calling in experts in design and human factors and social interaction, and they would be primed to listen to their advice. While you have those people in, you might rewrite some of the procedures. And you would request better equipment from the medical manufacturers. You might even institute performance standards. Here are the functional standards, and your equipment has to meet them. And your equipment has to have some standard interactions, so that I can take the equipment from several manufacturers and bring them together and it will all work smoothly. But I think it starts with the attitudes of the medical staff.

RW: We're beginning to see rapid acceleration in electronic records and computerized order entry. The early literature was very positive. In the last couple of years, new research has begun to chronicle all the errors that computers can create—all the unforeseen, but probably not unforeseeable, consequences of computerization. What insights can you offer about this process of computerizing something so complex? Are we just going to have to make it through this learning curve and make mistakes as we go along, or is there a way to bypass some of those errors?

DN: The same philosophy has to be used in bringing computers into health care, or you're apt to make the problems worse, because it's not about the computers; it's about the procedures that are being followed. That's one of the great truths about new devices, new technology, and automation. They do not simplify your task; they change your task. So instead of doing x, you're now doing y. Instead of standing over the patient and monitoring them clinically, you are now sitting in front of the computer terminal typing away. The technology changes the nature of the job. Many problems in instituting these new technologies have to do with the workflow and the way the work is transferred from place to place. The machines require a standardization of data input that hospitals don't have. Physicians are notorious for each having their own idiosyncratic ways of doing things. Automation is not going to succeed without standardization.

We have learned enough from how automation has been introduced in other industries to know that it always causes trouble and that it always changes the social nature of the task. Maybe we can avoid this in medicine. By being the last in line to automate, we may be able to take advantage of what we've learned from errors in other industries, and also of the fact that, today, all machines have chips inside, and before long everything will have a wireless communication device. It should be possible to have technology that allows you to do your primary job of medicine while also helping the communication among all the different workers. Years ago, physicians would never look up symptoms or drug dosages and interactions in front of the patient. Today it's quite common for physicians to turn to a book or a computer, look something up, and come back to the patient. It's also common for the patient to come in with all sorts of printouts. Well, that's healthy; I think it encourages a better dialogue. These kinds of conversations, not only between patients and physicians but also between physicians and their supporting staff, will improve medicine and reduce the error rate.
