Study

Patient and consumer safety risks when using conversational assistants for medical information: an observational study of Siri, Alexa, and Google Assistant.

Bickmore TW, Trinh H, Olafsson S, et al. Patient and consumer safety risks when using conversational assistants for medical information: an observational study of Siri, Alexa, and Google Assistant. J Med Internet Res. 2018;20(9):e11510. doi:10.2196/11510.

September 19, 2018

Experts have raised safety concerns about patients seeking medical information on the Internet. This study examined whether conversational assistants such as Siri, Alexa, and Google Assistant provide accurate information in response to medical questions. Lay participants queried a randomly assigned conversational assistant with a health-related question of their own as well as with standardized questions about medication use and symptom recognition. The conversational assistants were unable to answer the majority of questions, and among the questions that were answered, a substantial proportion of the suggested actions (29%) could have led to harm. The authors conclude that conversational assistants are neither safe nor effective for providing actionable medical information.
