I couldn’t help thinking of the trash talk at the weigh-in before a professional heavyweight fight when I read the recent JAMA Network editorial on ChatGPT. No way José are the editors going to allow artificial intelligence to appear as an author on JAMA articles. “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship,” they write.
The use of ChatGPT, the editors warn, presents a serious problem for medical journals:
In this era of pervasive misinformation and mistrust, responsible use of AI language models and transparent reporting of how these tools are used in the creation of information and publication are vital to promote and protect the credibility and integrity of medical research and trust in medical knowledge.
But given the widespread problem of fake journal articles, perhaps we should echo Muhammad Ali's taunt to George Foreman: "If you even dream of beating me, you'd better wake up and apologize." Medical journals – perhaps not JAMA, but those lower down the food chain – need to be very alert. Some people will not scruple to use ChatGPT if they can get away with it.
Take, for example, the Colombian judge who recently used ChatGPT to help write a ruling on an autistic boy's medical funding. The decision itself was uncontroversial; what raised eyebrows was that the judge, Juan Manuel Padilla, included his conversations with ChatGPT in the supporting documents.
He asked: “Is an autistic minor exonerated from paying fees for their therapies?” And the AI bot replied: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.”
Padilla explained that ChatGPT played only a supporting role and was not a replacement for human prudence and judgment.
A judge in Colombia’s Supreme Court was sympathetic to his colleague’s pioneering work in this area.
“The justice system should make the most of technology as a tool but always while following ethics and taking into account that the administrator of justice is ultimately a human being,” said Octavio Tejeiro. “It must be seen as an instrument that serves the judge to improve his judgment. We cannot allow the tool to become more important than the person.”
Footnote: ChatGPT denies that it is an acronym. “No, ‘ChatGPT’ is not an acronym. It is a name composed of ‘Chat’ and ‘GPT’ which stands for Generative Pretrained Transformer.”