What’s the big deal with artificial intelligence?
Stephen Hawking recently wrote an article warning of the dangers of artificial intelligence. Nobody seemed too concerned.
Stephen Hawking recently co-authored an article in the Independent warning of the dangers of AI. AI, Hawking claims, poses a grave risk to the human race:
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Hawking exhorts humanity to pay more attention to this issue, and applauds the work of think tanks focused on addressing massive ‘existential risks’ (problems, like AI, that could threaten our very existence).
Hawking’s article received only modest attention, despite its syndication in two major publications.
The Atlantic’s James Hamblin suggests people just aren’t concerned about these rather abstract issues.
“I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a singularity?”