Article

Drivers and social implications of Artificial Intelligence adoption in healthcare during the COVID-19 pandemic

Darius-Aurel Frank, Christian T. Elbæk, Caroline Kjær Børsting, Panagiotis Mitkidis, Tobias Otterbring, and Sylvie Borau

Abstract

The COVID-19 pandemic continues to impact people worldwide, steadily depleting scarce healthcare resources. Medical Artificial Intelligence (AI) promises much-needed relief, but only if the technology is adopted at scale. The present research investigates people's intention to adopt medical AI, as well as the drivers of this adoption, in a representative study of two European countries (Denmark and France, N = 1068) during the initial phase of the COVID-19 pandemic. Results reveal AI aversion: only 1 in 10 individuals chose medical AI over human physicians in a hypothetical triage phase prior to COVID-19 hospital admission. Key predictors of medical AI adoption are people's trust in medical AI and, to a lesser extent, the trait of open-mindedness. More importantly, our results reveal that mistrust of, and perceived uniqueness neglect from, human physicians, as well as a lack of social belonging, significantly increase people's medical AI adoption. These results suggest that for medical AI to be widely adopted, people may need to express less confidence in human physicians and even feel disconnected from humanity. We discuss the social implications of these findings and propose that successful medical AI adoption policy should focus on trust-building measures, without eroding trust in human physicians.

Published in

PLOS ONE, vol. 16, no. 11, e0259928, November 2021