Health analysts have mixed opinions.
One of the biggest developments in medicine in recent years has been the application of AI technology to healthcare practice.
Increasingly, lower-order activities that were once carried out by doctors — such as basic diagnostic tests and electronic record keeping — are now performed by AI-based systems. AI technology is often more accurate and precise than human doctors, and it also offers cash-strapped healthcare providers an opportunity to reduce operational costs.
But what impact might AI technology have on the relationship between doctors and their patients?
Some tech commentators are very optimistic, arguing that AI will free up doctors to focus on more important tasks, such as providing emotional and psychological support for patients. In his most recent book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Eric Topol (Scripps Research Institute) argues that AI technology will allow physicians to devote more of their time to face-to-face patient care. As AI becomes further integrated into the everyday practice of medicine, Topol argues, we have “the opportunity to restore the precious and time-honored connection and trust—the human touch—between patients and doctors”. AI technology could also revolutionise the way we select and train doctors, and even the manner in which doctors and other healthcare professionals collaborate.
Other commentators, however, are more circumspect. In a recent review of Topol’s book — published in the Hastings Center Report — Rob Sparrow and Joshua Hatherley of Monash University argue that AI technology may in fact erode rather than enhance the therapeutic relationship. As AI becomes more of a presence in the early stages of treatment (such as diagnosis), patients may become less trusting of doctors.
“If doctors start to rely on advice from AI, the question will arise whether we should—indeed, how we could—trust our doctors… If we don’t believe that it is our physicians who are really making the decisions about our health care, then it’s hard to see how we could feel that they are caring for us. They might care about us, but that’s not the same as caring for us.”
Sparrow and Hatherley also fear that AI may, ironically, disenfranchise doctors in their workplaces. They write that AI is “likely to demoralize, fragment, and disempower the medical profession” rather than allowing doctors “to rise up and demand better working conditions and better outcomes for their patients”.
“Even if AI is unlikely to replace physicians entirely, it is likely to render redundant skills that the current generation of physicians spent years learning and have placed at the heart of their professional self-conception…More generally, as with previous generations of information and computing technology, the introduction of AI into hospitals and health care settings is likely to lead to a shift in power and authority away from frontline practitioners to those who manage and design the IT systems…Physicians who are demoralized, disempowered, concerned for their jobs, and feel themselves to be under surveillance are ill placed to win political victories.”
Will AI make healthcare more human?