OpenAI’s leadership believes the world needs an international regulatory agency, akin to the one overseeing nuclear power, and it needs one quickly, since AI is advancing rapidly and poses evident dangers. But not too quickly.
In a blog post, OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever explain that artificial intelligence is advancing too quickly for existing authorities to adequately rein it in.
The technology, most visible in OpenAI’s ChatGPT conversational agent, is both a threat and an asset. AI won’t manage itself, the post admits:
Leading development efforts must coordinate to ensure that superintelligence systems remain safe and integrate smoothly into society.
Superintelligence efforts above a certain capability (or compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.
The IAEA is the UN’s official body for international collaboration on nuclear power, but like other such institutions, it can lack teeth. An AI-governing body built on this model may not be able to flip the switch on a bad actor, but it can establish and track international standards and agreements, which is at least a starting point.
OpenAI highlights that tracking the compute power and energy usage devoted to AI research is one of the few objective measures that can, and probably should, be reported and tracked. As in other industries, these resources should be monitored and audited. The company suggested exempting smaller companies to avoid stifling innovation.
Timnit Gebru, a leading AI researcher and critic, told The Guardian today: “Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”
OpenAI has visibly embraced the latter, to the dismay of many who hoped it would live up to its name. Still, as a market leader, it is calling for governance action that goes beyond hearings like the most recent one, where Senators lined up to deliver reelection speeches that ended in question marks.
As it stands, the proposal amounts to “maybe we should, like, do something,” but it at least starts a conversation in the industry and signals support from the world’s largest single AI brand and provider. Public oversight is sorely needed, though the post concedes that “we don’t yet know how to design such a mechanism.”
And although the company’s leaders say they support tapping the brakes, they don’t want to let go of the enormous potential “to improve our societies” (not to mention their bottom lines), especially when bad actors may have their foot firmly on the gas.