March 24, 2024

What the experts think of malign superintelligence

Not much

Skynet is not keen on those pesky humans 

Will you live long enough to be enslaved by super-intelligent artificial intelligence? Oxford philosophy professor Nick Bostrom has often made headlines with predictions that you might.

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he writes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

His book Superintelligence: Paths, Dangers, Strategies was a New York Times bestseller last year, endorsed by the likes of Tesla boss Elon Musk and Bill Gates.

But what do experts in artificial intelligence think of the philosopher’s predictions?

Writing in MIT Technology Review, Oren Etzioni, a professor of computer science at the University of Washington, surveyed members of the American Association for Artificial Intelligence. They were sceptical. About 25% thought that superintelligence would never happen, and in all 92% placed it beyond the foreseeable horizon, more than 25 years away if it arrives at all. Some of their comments were not flattering:

“Way, way, way more than 25 years. Centuries most likely. But not never.”

“We’re competing with millions of years’ evolution of the human brain. We can write single-purpose programs that can compete with humans, and sometimes excel, but the world is not neatly compartmentalized into single-problem questions.”

“Nick Bostrom is a professional scare monger. His Institute’s role is to find existential threats to humanity. He sees them everywhere. I am tempted to refer to him as the ‘Donald Trump’ of AI.”

Tags: Nick Bostrom, super-intelligent AI, transhumanism