Some regulators are relying on old rules, written long before the technology existed, to rein in artificial intelligence services like ChatGPT that could reshape how society and industry operate.
Rapid advances in the generative AI technology behind OpenAI’s ChatGPT have raised concerns about privacy and safety. The EU is leading the way in drafting new AI rules that could set the global standard, but that legislation will take years to come into force.
“In the absence of regulations, governments can only apply existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.
“If it’s about protecting personal data, they apply data protection laws; if it’s a threat to the safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”
Europe’s national privacy watchdogs set up a task force in April to address concerns about ChatGPT after Italian regulator Garante took the service offline, accusing OpenAI of violating the EU’s GDPR.
The U.S. company reinstated ChatGPT after adding age verification features and letting European users opt out of having their data used to train its AI models.
A source close to Garante told Reuters the agency would examine other generative AI tools more broadly. France and Spain also launched privacy investigations into OpenAI in April.