A third of AI researchers believe AI could have ‘catastrophic’ consequences this century, comparable to all-out nuclear war

A survey of scientists and researchers working in the field of artificial intelligence (AI) found that about a third believe it could cause a disaster comparable to all-out nuclear war.

The survey was given to researchers who had co-authored at least two computational linguistics publications between 2019 and 2022. It aimed to uncover the field’s views on controversial topics surrounding AI and artificial general intelligence (AGI) – the ability of an AI to think like a human – as well as the impact that people in the research field believe AI will have on society as a whole. The results are published in a preprint article that has not yet been peer-reviewed.

AGI, as the paper notes, is a controversial topic in the field. There are big differences of opinion on whether we are moving towards this, whether this is something we should be aiming for at all, and what will happen when humanity gets there.

“The community at large knows this is a controversial issue, and now (thanks to this survey) we can know that we know this is controversial,” the team wrote in their paper. Among the (quite mixed) results, 58% of respondents agreed that AGI should be an important concern for natural language processing research, while 57% agreed that recent research had moved us towards AGI.

Where it gets interesting is how AI researchers think AGI will affect the world as a whole.

“73% of respondents agree that AI-driven work automation could plausibly lead to revolutionary societal change this century, at least on the scale of the Industrial Revolution,” the researchers wrote of their survey.

Meanwhile, a non-trivial 36% of respondents agreed that it is plausible that AI could produce catastrophic outcomes this century, “on the level of all-out nuclear war”.

It’s not the most reassuring finding when a significant portion of a field believes its own work could lead to the destruction of humanity. However, in the comments section, some respondents objected to the wording “all-out nuclear war”, writing that they “would be okay with less extreme formulations of the question”.

“This suggests that our result of 36% is an underestimate of respondents who are seriously concerned about the negative impacts of AI systems,” the team wrote.

While (perhaps with good reason) wary of the potentially catastrophic consequences of AGI, the researchers overwhelmingly agreed that natural language processing has “a positive overall impact on the world, both to date (89%) and in the future (87%)”.

“Although opinions are anti-correlated, a substantial minority of 23% of respondents agreed with both Q6-2 [that AGI could be catastrophic on a par with an all-out nuclear war] and Q3-4 [that NLP has an overall positive impact on the world],” the researchers wrote, “suggesting that they might believe that the potential for positive impact of NLP is so great that it even outweighs the plausible threats to civilization.”

Among other findings, 74% of AI researchers think the private sector influences the field too heavily, and 60% think the carbon footprint of training large models should be a major concern for NLP researchers.

The article is published on the arXiv preprint server.

James G. Williams