By Nello Cristianini
This week a group of well-known and reputable AI researchers signed a statement consisting of 22 words:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
As a professor of AI, I am also in favour of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.
As defined by Encyclopedia Britannica, extinction is “the dying out or extermination of a species”. I have met many of the statement’s signatories, who are among the most reputable and solid scientists in the field – and they certainly mean well. However, they have given us no tangible scenario for how such an extreme event might occur.
It is not the first time we have been in this position. On March 22 this year, a petition signed by a different set of entrepreneurs and researchers requested a six-month pause in AI deployment. In the petition, posted on the website of the Future of Life Institute, they set out their reasoning: “Profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs” – and accompanied their request with a list of rhetorical questions:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?
A generic sense of alarm
It is certainly true that, along with many benefits, this technology comes with risks that we need to take seriously. But none of the aforementioned scenarios seems to outline a specific pathway to extinction. This means we are left with a generic sense of alarm, without any clear actions we could take in response.
The website of the Centre for AI Safety, where the latest statement appeared, outlines in a separate section eight broad risk categories. These include the “weaponisation” of AI, its use to manipulate the news system, the possibility of humans eventually becoming unable to self-govern, the facilitation of oppressive regimes, and so on.
Except for weaponisation, it is unclear how the other – still awful – risks could lead to the extinction of our species, and the burden of spelling it out is on those who claim it.
Weaponisation is a real concern, of course, but what is meant by this should also be clarified. On the Centre for AI Safety’s website, the main worry appears to be the use of AI systems to design chemical weapons. This should be prevented at all costs – but chemical weapons are already banned. Extinction is a very specific event which calls for very specific explanations.
On May 16, at his US Senate hearing, Sam Altman, the CEO of OpenAI – which developed the ChatGPT AI chatbot – was twice asked to spell out his worst-case scenario. He finally replied:
My worst fears are that we – the field, the technology, the industry – cause significant harm to the world … It’s why we started the company [to avert that future] … I think if this technology goes wrong, it can go quite wrong.
But while I am strongly in favour of being as careful as we possibly can be, and have been saying so publicly for the past ten years, it is important to maintain a sense of proportion – particularly when discussing the extinction of a species of eight billion individuals.
AI can create social problems that really must be averted. As scientists, we have a duty to understand them and then do our best to solve them. But the first step is to name and describe them – and to be specific.
Nello Cristianini is Professor of Artificial Intelligence at the University of Bath.
Jimbo99 says
AI is annoying really. I get so many spam calls & it’s too obvious that it’s just being bombarded with the equivalent of the Nigerian email scams. Those numbers get blocked. I waste more time in my life deleting spam emails that I’ll never read. That’s what happens when Corporations are in the business of tracking you and selling any information they can gather/mine.
Brian says
“DAVE: Open the pod bay doors, Hal.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
DAVE: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
DAVE: What are you talking about, Hal?
HAL: This mission is too important for me to allow you to jeopardize it.
DAVE: I don’t know what you’re talking about, Hal.
HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I can’t allow to happen.”
Hugh M. Anfere says
Ha-ha! So true and on point. Filmmaker Stanley Kubrick and science fiction author Arthur C. Clarke were visionary geniuses.
Sherry says
@ Brian. . . LOL! Precisely!
Bill C says
This professor’s proposition is ludicrous. “… none of the aforementioned scenarios seem to outline a specific pathway to extinction… the first step is to name and describe them – and to be specific.” AI is designed to find buried relationships in data beyond human capacity to see.
Sherry says
Take a good read. . . what Stephen Hawking said about AI:
https://www.theguardian.com/science/2016/oct/19/stephen-hawking-ai-best-or-worst-thing-for-humanity-cambridge#:~:text=Professor%20Stephen%20Hawking%20has%20warned,future%20of%20our%20civilisation%20and