Britain’s data watchdog warned Friday that Snapchat may have failed to adequately assess the privacy risks its artificial intelligence chatbot poses to children. The regulator said it will consider the company’s response before deciding whether to take enforcement action.
According to the Information Commissioner’s Office (ICO), “My A.I.” could be blocked in the U.K. if the U.S. company does not address the regulator’s concerns.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My A.I.’,” Information Commissioner John Edwards stated.
The agency said the findings do not necessarily mean the youth-focused messaging service has breached British data protection law, or that the ICO will issue an enforcement notice. Snap said it was reviewing the ICO’s warning and remained committed to protecting users’ privacy.
A Snap spokesman added: “My A.I. went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”
The ICO is examining how Snapchat’s “My A.I.” handles the personal data of its 21 million U.K. users, including 13- to 17-year-olds. “My A.I.” is powered by OpenAI’s ChatGPT, the best-known generative A.I. system, and regulators worldwide are scrutinizing such tools over privacy and safety concerns.
Snapchat and other social media networks require users to be at least 13, but they have had mixed success keeping younger children off their platforms. In August, Reuters reported that the ICO was investigating whether Snapchat was removing underage users.