
The Dropped "No"

Johnson AJ. The Dropped "No". PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2011.


Annette J. Johnson, MD, MS | October 1, 2011

The Case

A 62-year-old man with a history of cirrhosis was admitted with increasing abdominal girth and swelling in his legs. Because the leg swelling was somewhat more pronounced in his right leg, the team ordered an ultrasound to rule out a deep venous thrombosis (DVT) or blood clot. The ultrasound showed no DVT—this finding was communicated verbally to the primary team. However, on dictation, the first word "No" was obscured by the dictation system click that occurs when the speaker initiates recording. As a result, the truncated report read, "DVT is seen…" rather than "No DVT is seen…" Based on the verbal communication with the radiologist, the primary team proceeded under the (correct) understanding that the ultrasound had been negative.

Unfortunately, when the patient developed a heart arrhythmia (atrial fibrillation) two nights later (a Saturday), the night float resident looked at the (incorrect) report. Believing that the patient had a DVT, the resident appropriately worried that part of the blood clot had broken off, travelled to the lung, and caused a pulmonary embolus. When the primary team returned in the morning, the night float alerted them to this read. The primary team's resident paged radiology and spoke with the on-call radiologist (a resident) who was at another site and, therefore, did not have access to the image. The on-call resident was able to pull up the report—which appeared to indicate that there was a DVT—and reassured the primary team that the original reader was one of the best.

Because the patient was a poor candidate for anticoagulation, a filter was placed in the inferior vena cava (IVC), the large vein that returns blood from the lower body to the heart, to trap any clot fragments before they could travel to the lungs. Concerned that a read had changed (from the verbal sign-out the radiology attending had given the resident to the official read) without a call to the team, the attending on the primary team filed an incident report. When the radiologist was contacted about the incident report, she remembered the patient and that he did not have a DVT. She was able to listen to the dictation and hear the click that obscured the "No" at the beginning of the report. Once the mistake was discovered, the IVC filter was removed (about 2 days after it was placed). Fortunately, the patient tolerated the procedures well.

The Commentary

This complex case involves a number of different types of errors, but this commentary will focus on errors associated with radiology reporting processes, especially those resulting from the physician–information technology (IT) interface.

It is worth highlighting the fact that the radiologist rightly called the report to the treatment team at the time of interpretation. Such timely and effective communication of urgent findings is the recommended practice.(1,2) Errors relating to transcription and dictating systems include missed words, incorrect words, and nonsense words. The primary error in this case was that the word "no" was obscured by the dictation system click that occurs when the microphone is turned on—and that this word omission was not caught and corrected by the dictating radiologist. Studies have shown that 30%–42% of finalized radiology reports contain such dictation-related errors.(3,4)

Such errors can have a substantial impact on outcomes: in this case, the patient underwent an unnecessary IVC filter placement. They can lead to unnecessary invasive procedures, failure to administer needed treatments, or surgery on the wrong side or at the wrong level. Losing a word as short as "no" is very easy to do and can occur with either a traditional human transcriptionist system or a voice recognition system (VRS). Such word omissions or duplications occur more frequently with VRS than with transcriptionist systems.(5-7) For example, one study found that about 4.8% of VRS reports contained non-trivial errors (most of which were judged to affect the understanding of the report) compared with 2.1% of transcriptionist reports.(5)

There are several strategies to reduce such reporting errors. Since multiple studies have shown that error rates are 2 to 10 times greater with VRS than with transcriptionist-based systems (3-5,8), one strategy is to use human transcriptionists. Despite VRS being somewhat more error-prone, care providers likely will continue to switch to VRS, in part because the finalized written report is available much sooner (hours versus days).(4,6) Some have suggested that VRS is cost-saving (i.e., by eliminating transcriptionists' salaries) (8), while others have suggested that the potential gain in revenue becomes a loss once the decreased productivity of radiologists is accounted for.(4)

Multiple investigations have suggested that the main cause of the increased error rate with VRS is breakdown in the proofreading process.(3,6,7,9) With VRS, the clerical burden of proofreading reports for errors has been transferred from transcriptionists to radiologists, who prove less apt to find errors. Proofreading of reports is surprisingly difficult and time-consuming, and radiologists' productivity measures are typically based on the number of reports signed or work RVUs (relative value units) and do not factor in the number of errors. A quality improvement or incentive system that rewards radiologists for fewer such errors might result in better overall proofreading. Just as some hospitals offer radiologists small bonuses for short report turnaround times, hospitals could offer similar bonuses for low transcription-related error rates in reports (perhaps having peers do random report reviews to estimate error rates). Alternatively, processes whereby a dedicated transcriptionist reviews reports (even those created via voice recognition) for errors in real time could be helpful, though expensive. Another strategy may be to place greater emphasis on training radiologists in dictation techniques and standardized language.(10)

Double-reading is another system change that can prevent errors.(11) In the reference case, when the primary team recognized the discrepancy between the called report and the finalized report, they should have had the images re-reviewed rather than accepting reassurance from the on-call resident that the dictating radiologist had the necessary expertise. The crucial issue—and point of discrepancy relevant to the clinical management decision—was what the images actually showed. If the on-call resident could not access the images, then the request should have gone up the chain-of-command to someone who could re-interpret the images.

There are several ways to change the clinician–IT interface to reduce errors. Use of report macros/templates (see Figure) and experience with VRS software enable users to shorten the dictation process and reduce the likelihood of errors.(6-8) Radiologists who wish to decrease errors may therefore employ report templates and/or create shorter reports. However, use of templates does not preclude errors related to poor proofreading (5,8), as those of us who use them daily can attest.
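The bracketed-field mechanism the Figure describes can be sketched in a few lines. This is a minimal illustration with a hypothetical lower-extremity ultrasound template (not any actual VRS vendor's format): the fixed boilerplate is authored once, the bracketed defaults stay intact unless the radiologist deliberately replaces them, and only abnormal findings require free dictation.

```python
import re

# Hypothetical normal-study template for a lower-extremity venous ultrasound.
# Bracketed fields hold default (normal) text; everything else is fixed
# boilerplate that is never dictated, so it can never be garbled.
TEMPLATE = (
    "EXAM: Lower extremity venous ultrasound.\n"
    "FINDINGS: [No evidence of deep venous thrombosis.]\n"
    "IMPRESSION: [Negative study.]"
)

def fill_template(template, replacements):
    """Replace bracketed fields by position; unfilled fields keep their default text."""
    defaults = re.findall(r"\[([^\]]*)\]", template)
    out = template
    for i, default in enumerate(defaults):
        text = replacements.get(i, default)  # keep the default if nothing was dictated
        out = out.replace("[" + default + "]", text, 1)
    return out

# Normal study: every default is accepted as-is, so nothing can be mistyped.
print(fill_template(TEMPLATE, {}))

# Abnormal study: only the changed fields are dictated free-form.
print(fill_template(TEMPLATE, {0: "Acute DVT in the right popliteal vein.",
                               1: "Positive for DVT."}))
```

The design point is that the phrase "No evidence of deep venous thrombosis." travels as a complete default string, so a dropped "no" would require deliberately dictating over the field rather than a momentary microphone click.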

VRS tend to have fewer errors when the entire report is dictated as a whole rather than being dictated during the interpretation process.(6,8) Though this may suggest itself as another strategy to reduce errors, it is likely that a minority of radiologists dictate in this way. Most radiologists do not look at a study, decide what the interpretation is, and then dictate the entire report without pausing or turning off the microphone. Very often they begin dictating while interpreting the images and dictate in a start-and-stop fashion, in essence shifting back and forth between image review and dictation. Given the cognitive and efficiency disadvantages of systems that require radiologists to look away from images in order to visually interact with dictation software, some authors have suggested that future VRS should be entirely speech activated, with proofreading done in batch-mode at a later time.(12) However, such systems are not yet widely available.

Two potentially advantageous changes involve new technology still under development. The first is structured reporting systems (SRS). A true SRS involves standardized dictation choices selected from preset menus. Conceptually, it seems intuitive that use of an SRS would be less likely to result in reports with dropped words or similar errors. For example, if the radiologist in the reference case had to choose phrases from a drop-down menu, such as "DVT is present" or "no DVT is present," the likelihood of choosing the former when the latter was intended seems lower than the likelihood of a short word such as "no" being dropped in a VRS. However, the literature supporting this concept is hardly convincing. One study from 2001 suggested that "transcription errors" decreased with their SRS, but the authors acknowledged that they did not look at error rates related to radiologists mis-selecting menu items.(13) A prospective study from different investigators found no improvement in accuracy with SRS versus VRS.(14)
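The SRS concept can be made concrete with a small sketch, assuming a hypothetical two-item menu (the phrases below are illustrative, not from any actual SRS product). Findings enter the report only as complete preset phrases chosen by index, so a single short word such as "no" cannot be dropped in transit; the residual risk is selecting the wrong whole phrase, which is the distinct error mode the 2001 study did not measure.

```python
# Hypothetical preset menu for a single finding in a structured report.
DVT_MENU = ["No DVT is present.", "DVT is present."]

def choose(menu, index):
    """Return a complete preset phrase; reject anything that is not a valid selection."""
    # An out-of-range selection fails loudly rather than silently producing
    # a truncated or garbled sentence, unlike a dropped word in dictation.
    if not 0 <= index < len(menu):
        raise ValueError("invalid menu selection")
    return menu[index]

print(choose(DVT_MENU, 0))  # → No DVT is present.
```

Because the unit of input is an entire phrase rather than a stream of words, the system trades the dropped-word failure mode for a mis-selection failure mode.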

A second strategy under development is to use natural language processing systems or other automated software to try to find errors within reports after dictation. There is evidence for decreasing error rates through such systems, but creators of one tested system acknowledge that it would not yet solve the dropped "no" problem.(15)
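A toy version of such automated checking, assuming a hand-curated list of critical findings and negation cues (all hypothetical, not the system tested in reference 15), shows both the promise and the limitation. The checker can flag a finding that appears without a nearby negation for human re-review, but it cannot know whether the negation was omitted by the system or the finding is genuinely positive, so every true-positive report is flagged as well; it prompts re-review rather than correcting the report.

```python
import re

# Hypothetical, hand-curated lists; a real system would be far larger.
CRITICAL_FINDINGS = ["dvt", "pulmonary embolus", "pneumothorax"]
NEGATIONS = re.compile(r"\b(no|without|negative for|free of)\b")

def flag_unnegated_findings(report):
    """Return critical findings that appear in a sentence with no negation cue."""
    flags = []
    for sentence in re.split(r"[.!?]", report.lower()):
        for finding in CRITICAL_FINDINGS:
            if finding in sentence and not NEGATIONS.search(sentence):
                flags.append(finding)
    return flags

print(flag_unnegated_findings("DVT is seen in the right leg."))  # → ['dvt']
print(flag_unnegated_findings("No DVT is seen."))                # → []
```

Note that the truncated report in this case, "DVT is seen…," is a perfectly grammatical positive report on its own; a checker like this would flag it for re-review along with every genuine positive, which is why simple automated detection does not yet solve the dropped "no" problem.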

Take-Home Points

This patient's experience illustrates several key points about errors associated with radiology reporting processes:

  • When discrepancies arise between verbal and finalized reports, double-reading of the imaging study itself is warranted.
  • Though there are substantial gains in timeliness and report availability, VRS have greater error rates than human transcriptionists.
  • Use of report templates with VRS may help lower error rates.
  • Since the primary cause for errors with VRS is breakdown in the proofreading process, a QI/incentive system rewarding low error rates might result in fewer errors.
  • Several IT innovations under development hold promise to decrease errors: entirely voice activated VRS, SRS, and error detection through natural language processing.

Annette J. Johnson, MD, MS
Associate Professor

Department of Radiology

Wake Forest University School of Medicine

References

1. Brenner RJ, Lucey LL, Smith JJ, Saunders R. Radiology and medical malpractice claims: a report on the practice standards claims survey of the Physician Insurers Association of America and the American College of Radiology. AJR Am J Roentgenol. 1998;171:19-22.

2. ACR Practice Guideline for Communication of Diagnostic Imaging Findings. Reston, VA: American College of Radiology; 2010.

3. Pezzullo JA, Tung GA, Rogg JM, Davis LM, Brody JM, Mayo-Smith WW. Voice recognition dictation: radiologist as transcriptionist. J Digit Imaging. 2008;21:384-389.

4. Strahan RH, Schneider-Kolsky ME. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting. J Med Imaging Radiat Oncol. 2010;54:411-414.

5. McGurk S, Brauer K, Macfarlane TV, Duncan KA. The effect of voice recognition software on comparative error rates in radiology reports. Br J Radiol. 2008;81:767-770.

6. Ramaswamy MR, Chaljub G, Esch O, Fanning DD, vanSonnenberg E. Continuous speech recognition in MR imaging reporting: advantages, disadvantages, and impact. AJR Am J Roentgenol. 2000;174:617-622.

7. White KS. Speech recognition implementation in radiology. Pediatr Radiol. 2005;35:841-846.

8. Rana DS, Hurst G, Shepstone L, Pilling J, Cockburn J, Crawford M. Voice recognition for radiology reporting: is it good enough? Clin Radiol. 2005;60:1205-1212.

9. Quint LE, Quint DJ, Myles JD. Frequency and spectrum of errors in final radiology reports generated with automatic speech recognition technology. J Am Coll Radiol. 2008;5:1196-1199.

10. Bosmans JML, Weyler JJ, De Schepper AM, Parizel PM. The radiology report as seen by radiologists and referring clinicians: results of the COVER and ROVER surveys. Radiology. 2011;259:184-195.

11. Goddard P, Leslie A, Jones A, Wakeley C, Kabala J. Error in radiology. Br J Radiol. 2001;74:949-951.

12. Sistrom CL. Conceptual approach for the design of radiology reporting interfaces: the talking template. J Digit Imaging. 2005;18:176-187.

13. Berman GD, Gray RN, Liu D, Tyhurst JJ. Structured radiology reporting: a 4-year case study of 160,000 reports. Paper presented at: Integrating the Healthcare Enterprise (IHE) Symposium of the Radiological Society of North America (RSNA) 2001 Annual Meeting; November 25–30, 2001.

14. Johnson AJ, Chen MY, Swan JS, Applegate KE, Littenberg B. Cohort study of structured reporting compared with conventional dictation. Radiology. 2009;253:74-80.

15. Voll K, Atkins S, Forster B. Improving the utility of speech recognition through error detection. J Digit Imaging. 2008;21:371-377.

Figure

Figure. A typical VRS report template: the microphone button advances through the bracketed text fields, each of which can be replaced with dictated text as appropriate.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.
