In:
Journal of Medical Ethics, BMJ, Vol. 48, No. 11 (2022-11), pp. 852-856
Abstract:
Artificial intelligence (AI) is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. In response to these concerns, there has been a concerted effort from public bodies, policy-makers and the technology companies leading the way in AI to address what has been identified as a "public trust deficit". This paper argues that a focus on trust as the basis upon which a relationship between this new technology and the public is built is, at best, ineffective and, at worst, inappropriate or even dangerous, as it diverts attention from what is actually needed to actively warrant trust. Instead of agonising over how to facilitate trust, a type of relationship that can leave those who trust vulnerable and exposed, we argue that efforts should be focused on the difficult and dynamic process of ensuring reliance, underwritten by strong legal and regulatory frameworks. From there, trust could emerge, but not merely as a means to an end. Rather, it would be something to work towards in practice: the deserved result of an ongoing ethical relationship in which an appropriate, enforceable and reliable regulatory infrastructure is in place for problems, challenges and power asymmetries to be continuously accounted for and appropriately redressed.
Type of Medium:
Online Resource
ISSN:
0306-6800, 1473-4257
DOI:
10.1136/medethics-2020-107095
Language:
English
Publisher:
BMJ
Publication Date:
2022