Using human factors methods to mitigate bias in artificial intelligence-based clinical decision support.
The potential for bias in artificial intelligence (AI) training data is a well-known problem, but the potential for bias introduced by a poorly designed user interface (UI) is less well studied. Drawing on their experience developing a machine learning-based clinical decision support tool, the authors highlight three considerations for designing UIs for AI applications: (1) bias is not just about the algorithm, (2) bias and interpretation errors can be identified before an application is released, and (3) risk communication strategies can influence bias in unexpected ways.