UK focuses on transparency and access with new AI principles.
Britain outlined principles on Monday that emphasize the need for accountability and transparency to prevent a small number of tech giants from controlling artificial intelligence (AI) models to the detriment of consumers and businesses.
Like other governments worldwide, Britain’s Competition and Markets Authority (CMA) is working to curb some of the potential drawbacks of AI without inhibiting innovation.
Its seven guidelines seek to govern foundation models, the technology underpinning tools like ChatGPT, by holding developers accountable, preventing Big Tech from locking the technology into their walled-garden platforms, and stopping anti-competitive conduct such as bundling.
According to CMA Chief Executive Sarah Cardell, the technology has real potential to boost productivity and simplify millions of everyday tasks, but a positive outcome cannot be taken for granted.
She warned that a small number of players with market power could come to dominate the use of AI, preventing its full benefits from being felt across the economy.
To ensure that the creation and application of foundation models evolve in a way that fosters competition and safeguards consumers, she said, “We have today proposed these new principles and launched a broad program of engagement.”
The CMA’s draft guidelines, which arrive six weeks before Britain hosts a global summit on AI safety, will serve as the foundation for its AI policy as it gains new powers to regulate digital markets in the coming months.
The regulator said it would now seek views from leading AI developers such as Google, Meta, OpenAI, Microsoft, NVIDIA, and Anthropic, as well as from academics, governments, and other regulators.
The proposed principles also leave firms the flexibility to pursue a range of business models, including both open and closed approaches.
In March, Britain decided against creating a new AI regulator, opting instead to divide oversight of AI among the CMA and other bodies responsible for human rights and health and safety.
The United States is also weighing possible regulations for artificial intelligence, and in April, digital ministers from the Group of Seven (G7) nations agreed that AI rules should be “risk-based” while preserving an open environment.