
What will ChatGPT mean for bioethics?
I. Glenn Cohen, an expert on medical law and bioethics at Harvard Law School, has analysed the challenge that AI poses for bioethics in the American Journal of Bioethics. He acknowledges that his advice is tentative, as AI models like ChatGPT and Bard are advancing by the week.
He lists several concerns for medicine – some familiar, some not so familiar.
Who owns the data? These models are trained on texts written by scientists, doctors and patients. Should their contribution be acknowledged or compensated?
Is the data representative? “As a white male in my 40s,” writes Cohen, “I am extremely well-represented in the major data sets that train medical AI. This is not so for black women, indigenous populations, people living in rural settings, people from across the globe, and some populations with disabilities.”
False objectivity. By returning a list of links, an internet search engine at least acknowledges a range of possible answers to a question. “Compared to Google results, ChatGPT produces a single initial answer, not a selection of links, and thus a claim of a single truth.”
Privacy. The privacy risks of using Big Data are well known. The problems with ChatGPT are not substantially different, but such tools make it easier for anyone to access that data.
Informed consent. People need to know that they are interacting with AI rather than a real person.
Medical deepfakes. The danger of creating false and deceptive images, videos or articles is well-known. But ChatGPT also “poses new-ish risks about using AI to generate distrust or false beliefs about medicine.”
The doctor-patient relationship. As AI becomes more skilled at “diagnosing” ailments, patients may grow more suspicious of advice from their own doctors.