
OpenAI leaders call for international AI regulation

Photo: OpenAI

AI is developing quickly enough, and the dangers it poses are clear enough, that OpenAI’s leadership believes the world needs an international regulatory body akin to the one overseeing nuclear power, and soon. But not too soon.

In a blog post, OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever argue that innovation in artificial intelligence is moving too quickly for existing authorities to rein in.

The technology, most visible in OpenAI’s ChatGPT conversational agent, is both a threat and an asset, and the post admits that AI won’t manage itself:

Leading development efforts must coordinate to ensure that superintelligence is developed safely and integrated smoothly into society.

Any superintelligence effort above a certain capability (or compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security.

The IAEA is the UN’s official body for international collaboration on nuclear power, but like other such institutions it can lack teeth. An AI-governing body built on this model couldn’t simply flip the switch on a bad actor, but it could set and track international standards and agreements.

OpenAI notes that tracking the compute power and energy devoted to AI research is one of the few objective measures that can and should be reported and tracked; like the resources of other industries, AI’s resources should be monitored and audited. The company suggested exempting smaller developers to avoid stifling innovation.

Timnit Gebru, a leading AI researcher and critic, told The Guardian: “Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation, and we need something better than just a profit motive.”

OpenAI has visibly embraced the latter, to the dismay of many who hoped it would live up to its name. Still, as a market leader, it is calling for governance action that goes beyond hearings like the most recent one, where Senators line up to deliver reelection speeches that end in question marks.

The proposal amounts to little more than “maybe we should, like, do something,” but it at least starts a conversation in the industry, and it carries the backing of the world’s largest AI brand and provider. As the post concedes, “we don’t yet know how to design such a mechanism,” but public oversight is vital.

Although the company’s leaders support tapping the brakes, they have no plans to let go of AI’s enormous potential “to improve our societies” (not to mention bottom lines), not least because bad actors may well have their foot on the gas.
