
Musk and top AI experts urge a pause on “giant AI experiments”

Photo: Elon Musk

The “profound risks to society and humanity” posed by large-scale AI systems have prompted Elon Musk and several prominent AI researchers to sign an open letter urging AI laboratories worldwide to pause development.

The letter, published by the nonprofit Future of Life Institute, says that AI laboratories are locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”

“We urge all AI laboratories to halt training of AI systems more capable than GPT-4 for at least six months.”

The letter calls on all AI laboratories to pause the training of AI systems more powerful than GPT-4 for at least six months. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and prominent AI researchers and CEOs such as Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. The full list of signatories is here, though new names should be treated with caution, as there have been reports of joke additions (e.g., OpenAI CEO Sam Altman, whose company helped set off the current race dynamic in AI).

The letter is unlikely to change the course of AI development, where major corporations like Google and Microsoft are rushing to launch new products, sometimes sidelining safety and ethical concerns. But it reflects growing opposition to the “ship it now and fix it later” approach, an opposition that may eventually reach lawmakers.

As the letter notes, OpenAI itself has suggested an “independent review” of future AI systems to ensure they are safe. The signatories say that time is now.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write. These protocols should ensure that systems adhering to them are safe beyond reasonable doubt.

