The CNIL, France’s privacy agency, has produced an AI action plan that includes generative AI technologies like OpenAI’s ChatGPT. The CNIL’s Artificial Intelligence Service evaluates tech and recommends “privacy-friendly AI systems.”
The regulator wants to build AI “that respects personal data” by auditing and controlling AI systems to “protect people.” Other priorities include understanding how AI systems affect humans and supporting local AI ecosystem innovators who follow the CNIL’s best practices.
“The CNIL wants to establish clear rules protecting European citizens’ personal data to contribute to privacy-friendly AI systems,” it writes.
Calls for regulators to get a handle on AI are now coming almost weekly. In a US Senate hearing yesterday, for example, OpenAI CEO Sam Altman suggested licensing and testing requirements for AI systems.
European data protection officials, however, are already further along: Clearview AI has been routinely sanctioned across the bloc for misusing personal data, and Italy's regulator recently took enforcement action against Replika, an AI chatbot.
At the end of March, the Italian DPA publicly intervened against OpenAI's ChatGPT, prompting the company to roll out new disclosures and controls that let users limit how their data is used.
In April 2021, the EU presented a risk-based framework for AI regulation, which parliamentarians are currently negotiating.
The action plan will “also make it possible to prepare for the entry into application of the draft European AI Regulation, which is currently under discussion,” the CNIL says; that regulation could be adopted by the end of the year.
Existing data protection agencies (DPAs) may be tasked with enforcing the AI Act, so regulators will need to build AI expertise for the scheme to work. Given the bloc's leadership in digital rule-making, EU DPAs' priorities will shape AI's operating parameters in Europe and possibly beyond.