December 8, 2021

Artificial concerns?

This week Nature published a strident editorial defending ‘legitimate concerns’ about contemporary AI research.

Earlier this year, the American Information Technology and Innovation Foundation (ITIF) awarded its facetious ‘annual Luddite award’ to a loose coalition of AI sceptics, including Tesla CEO Elon Musk and renowned physicist Stephen Hawking. The ITIF labelled the likes of Musk and Hawking ‘alarmists’ engaged in “feverish hand-wringing about a looming artificial intelligence apocalypse”.

Yet the sarcastic gesture did not go down well. This week Nature published a scathing critique of the ITIF’s ‘fanciful futurism’, defending the ‘legitimate concerns’ of Musk and Hawking.

“Machines and robots that outperform humans across the board could self-improve beyond our control — and their interests might not align with ours. This extreme scenario, which cannot be discounted, is what captures most popular attention. But it is misleading to dismiss all concerns as worries about this.”

“Few foresaw that the Internet and other technologies would open the way for mass, and often indiscriminate, surveillance by intelligence and law-enforcement agencies, threatening principles of privacy and the right to dissent. AI could make such surveillance more widespread and more powerful.”

“Many experts worry that AI and robots are now set to replace repetitive but skilled jobs…The spectre of permanent mass unemployment, and increased inequality that hits hardest along lines of class, race and gender, is perhaps all too real.”

Ironically, the risks of AI are already being felt indirectly as universities lose young talent to the corporate sector. 

Xavier Symons
Creative commons