On Tuesday, top artificial intelligence leaders, including OpenAI CEO Sam Altman, joined academics and professors in warning of the “risk of extinction from AI” and urging policymakers to treat it as a threat on par with pandemics and nuclear weapons.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said more than 350 signatories of a statement issued by the nonprofit Center for AI Safety (CAIS). They included Altman, the CEOs of DeepMind and Anthropic, and executives from Microsoft and Google.
Two of the three “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio, who received the 2018 Turing Award for their work on deep learning, also signed, as did professors from institutions ranging from Harvard to China’s Tsinghua University.
CAIS criticized Yann LeCun’s employer, Meta (META.O), for not signing the letter. The letter coincides with the U.S.-EU Trade and Technology Council meeting in Sweden, where legislators are expected to discuss AI regulation.
In April, Elon Musk and a group of AI experts and industry executives had been among the first to warn of the technology’s potential hazards to society.
Recent advances in AI have produced tools that supporters say can be used in applications from medical diagnostics to writing legal briefs. But they have also raised concerns that the technology could violate privacy, power misinformation campaigns, and lead to “smart machines” that think for themselves.
Hinton told Reuters that AI could pose a “more urgent” threat to humanity than climate change. Last week, Altman called the EU’s draft AI Act, the first attempt at comprehensive AI regulation, over-regulation and threatened to leave Europe, though he quickly reversed course after criticism from politicians. European Commission President Ursula von der Leyen is set to meet Altman on Thursday.