There’s "The Terminator" school of perceiving artificial intelligence risks, in which we’ll all be killed by our robot overlords. And then there’s one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.
In 2023, as the world was shaken by the release of ChatGPT, Silicon Valley issued two successive warnings about existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in the development of advanced systems until their risks could be managed. Then hundreds of AI leaders, including Sam Altman of OpenAI and Demis Hassabis of Alphabet’s DeepMind, sent shockwaves with a statement that warned: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
Despite all the attention paid to these potentially catastrophic dangers, the years since have been marked by “accelerationists” largely drowning out the doomers. Companies and countries have raced to be the first to achieve superhuman AI, brushing off the early calls to prioritize safety. And it has all left the public deeply confused.