Industry leaders compare AI risks with pandemics and nuclear war
Artificial intelligence is in global focus, and even those who developed the technology are now warning about the dangers of using it. Around 350 AI industry leaders have signed a new statement published by the Center for AI Safety. The statement is “meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.” Signatories include representatives of OpenAI, which started the most recent AI race by launching the generative chatbot ChatGPT, as well as Google DeepMind and Microsoft.
The statement uses strong words, referring to risks of extinction, pandemics and nuclear war:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, the statement says.
On its website, the Center for AI Safety says: “AI has been compared to electricity and the steam engine in terms of its potential to transform society. The technology could be profoundly beneficial, but it also presents serious risks, due to competitive pressure and other factors.”
An earlier open letter, published by the Future of Life Institute and signed by around 1,500 specialists, argued without success for a six-month moratorium on AI development to give society time to find ways of controlling harmful uses of the technology.
These conflicting statements, weighing scientific benefits against dangers, have also sparked a discussion about why those who developed AI are now calling for governments and authorities to regulate how their achievements may be used.
Some agree about the dangers, pointing for instance to the ability to create ever more convincing manipulative deepfakes that can threaten democracy. A more cynical argument has been that those who have already developed AI want strict government regulation in order to limit the possibilities for others to develop their own.
The University of Cambridge has launched the book “Imagining AI: How the World Sees Intelligent Machines”, published by Oxford University Press.
The book is based on the research project “Global AI Narratives”, which ran from 2018 to 2022, and offers a variety of perspectives on how societies have seen the possibilities and risks of intelligent machines, including long before such machines were invented.
The research project aimed to understand and analyse how different cultures and regions perceive the risks and benefits of artificial intelligence, and the influences that are shaping those perceptions.
It was funded by DeepMind Ethics & Society (now part of Google DeepMind), the Templeton World Charity Foundation and the Leverhulme Trust.
The book is structured geographically, with each of its 25 chapters presenting insights into how a specific region or culture imagines intelligent machines. It covers everything from ancient philosophy and contemporary science fiction to visual art and policy discourse.