Study
Emerging Classic

Analysis of errors in dictated clinical documents assisted by speech recognition software and professional transcriptionists.

Zhou L; Blackley SV; Kowalski L; Doan R; Acker WW; Landman AB; Kontrient E; Mack D; Meteer M; Bates DW; Goss FR.

July 25, 2018
Zhou L, Blackley SV, Kowalski L, et al. JAMA Netw Open. 2018;1(3):e180530.

Clinical documentation is an essential part of patient care. However, in the electronic health record era, documentation is widely perceived to be inefficient and a significant driver of physician burnout. Speech recognition software, which transcribes clinicians' dictated speech directly into text, is increasingly used to streamline the documentation workflow. This study examined the accuracy of speech recognition software in a sample of notes (progress notes, operative notes, and discharge summaries) dictated by 144 clinicians across multiple specialties at 2 health systems. Transcripts produced by speech recognition software contained 7.4 errors per 100 transcribed words, many of them potentially clinically significant. Although review by a professional medical transcriptionist corrected most of these errors, about 1 in 300 words remained incorrect even in the final physician-signed note. This study corroborates prior research that found potentially significant error rates in software-transcribed emergency medicine and radiology notes. A WebM&M commentary discussed an adverse event caused by a transcription error in a radiology report.

