Study

An objective framework for evaluating unrecognized bias in medical AI models predicting COVID-19 outcomes.

Estiri H, Strasser ZH, Rashidian S, et al. An objective framework for evaluating unrecognized bias in medical AI models predicting COVID-19 outcomes. J Am Med Inform Assoc. 2022;29(8):1334-1341. doi: 10.1093/jamia/ocac070

June 1, 2022

While artificial intelligence (AI) has the potential to improve some areas of patient care, its safe use in healthcare depends, in part, on how the underlying models are trained. At the start of the COVID-19 pandemic, one hospital developed four AI models to predict risks such as hospitalization or intensive care unit (ICU) admission. The researchers found inconsistent instances of model-level bias and recommend a holistic approach to searching for unrecognized bias in health AI.
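
As a loose illustration of what one kind of subgroup-level bias check can look like in practice (not the framework described by Estiri et al.), the sketch below trains a simple classifier on synthetic data and compares its discrimination (AUC) between two hypothetical demographic groups. The variable names, the scikit-learn workflow, and the data are all illustrative assumptions.

```python
# Hypothetical sketch: comparing a classifier's AUC across demographic
# subgroups as one simple probe for model-level bias. This is NOT the
# framework from Estiri et al.; names and data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic cohort: two features, a sensitive attribute, and an outcome
# (e.g., hospitalization) whose relationship to the features is noisier
# for one group, mimicking an unrecognized data-quality disparity.
df = pd.DataFrame({
    "age": rng.normal(60, 15, n),
    "comorbidity_score": rng.poisson(2, n),
    "group": rng.choice(["A", "B"], n, p=[0.7, 0.3]),
})
logit = 0.04 * (df["age"] - 60) + 0.5 * df["comorbidity_score"]
logit += np.where(df["group"] == "B", rng.normal(0, 1.5, n), 0.0)
df["hospitalized"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = df[["age", "comorbidity_score"]]
y = df["hospitalized"]
X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
    X, y, df["group"], test_size=0.3, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

# Report overall and per-group discrimination; large gaps between
# subgroups flag models that deserve closer scrutiny.
print(f"Overall AUC: {roc_auc_score(y_te, probs):.3f}")
for grp in ["A", "B"]:
    mask = (g_te == grp).values
    print(f"AUC for group {grp}: {roc_auc_score(y_te[mask], probs[mask]):.3f}")
```

A large gap between subgroup AUCs would be only one signal among many; a holistic evaluation would also consider calibration, error rates, and the data pipeline itself.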
