Italy bans ChatGPT for privacy concerns.


Two days after an open letter called for a moratorium on more powerful generative A.I. models so regulators could catch up, Italy’s data protection authority ordered OpenAI to stop processing local users’ data immediately, a reminder that some countries already have laws that apply to cutting-edge A.I.

The Italian DPA is concerned that ChatGPT’s maker is violating the E.U.’s General Data Protection Regulation (GDPR).

The Garante said it blocked ChatGPT over concerns that OpenAI handled people’s data inappropriately and that the service lacks any means of preventing minors from accessing it.

The San Francisco-based company has 20 days to comply or face hefty penalties. (Reminder: E.U. data protection fines can reach 4% of annual global turnover or €20 million, whichever is greater.)
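For illustration only, the “whichever is greater” rule for the GDPR’s top-tier fine ceiling can be expressed as a one-line calculation (a hypothetical helper, not anything defined in the regulation’s text):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR fine (Art. 83(5)):
    the greater of 4% of annual global turnover or EUR 20 million."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# A company with EUR 1 billion in annual turnover: the 4% prong dominates.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0

# A smaller company with EUR 100 million in turnover: the EUR 20M floor applies.
print(max_gdpr_fine(100_000_000))  # 20000000.0
```

Note that this is only the statutory ceiling; actual fines are set case by case and are typically far lower.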

As OpenAI has no legal entity established in the E.U., any member state’s data protection authority can act under the GDPR if it identifies risks to local users. Italy may be only the first to move.

Processing E.U. users’ data is subject to the GDPR. Since OpenAI’s large language model can produce biographies of identified persons in the region on demand (we tried it), it has evidently been processing this kind of data. OpenAI has not disclosed the training data for GPT-4, but earlier models were trained on data scraped from the Internet, including Reddit. So if you’re at all active online, the bot probably knows your name.

ChatGPT has also been found to produce false statements about identified people, apparently fabricating details its training data lacks. Because Europeans have data rights, including the right to have errors corrected, this could create GDPR problems. At this point, it’s unclear how, or whether, OpenAI can correct the bot’s misstatements about individuals.

The Garante’s statement also mentions OpenAI’s data breach earlier this month, when a conversation-history feature leaked users’ chats and may have exposed some payment information.

The GDPR also regulates data breaches, requiring companies that process personal data to protect it adequately. The pan-E.U. law further requires prompt notification of serious breaches to supervisory authorities.

Then there’s the question of what legal basis OpenAI relied on to process Europeans’ data in the first place: in other words, the lawfulness of this processing.

The GDPR offers several possible legal bases for processing, such as consent or legitimate interests, and imposes principles like data minimization. Still, the sheer scale of processing required to train these large language models complicates the legality question, as the Garante notes (pointing to the “mass collection and storage of personal data”). The regulation also requires transparency and fairness. At a minimum, OpenAI, the for-profit startup behind ChatGPT, appears not to have told the people whose data it used to train its commercial A.I.s. That could be a sticky issue.

DPAs across Europe could order OpenAI to delete Europeans’ data if it was processed unlawfully. Whether that would also require retraining models built on that data is unclear, as an existing law grapples with cutting-edge technology.

Italy may have accidentally outlawed machine learning.

“[T]he Privacy Guarantor notes the lack of information to users and all interested parties whose data is collected by OpenAI but above all the absence of a legal basis that justifies the mass collection and storage of personal data, for the purpose of ‘training’ the algorithms underlying the operation of the platform,” the DPA writes today [translated from Italian using A.I.].

“As indicated by the checks, ChatGPT’s information does not always correlate to the real data, thereby indicating an incorrect handling of personal data,” it stated.

The authority is also concerned that minors’ data is being processed, since OpenAI does not use any age-verification technology to prevent children from signing up for the chatbot.

The agency recently banned Replika, a virtual-friendship A.I. chatbot, over child-safety concerns. In recent years it has also pursued TikTok over underage users, forcing the company to delete almost half a million accounts whose owners’ ages it could not verify.

If OpenAI cannot verify the age of its Italian users, it may have to delete their accounts and start over with a more rigorous sign-up process.

The Garante has asked OpenAI to respond.

“What’s remarkable is that it more or less copy-pasted Replika in its emphasis on access by youngsters to unsuitable content,” said Lilian Edwards, a specialist in data protection and Internet law at Newcastle University. The real time bomb, she argued, is the denial of a lawful basis, which should apply to many or even all machine learning systems, not just generative A.I.

She cited the pivotal “right to be forgotten” case involving Google Search, in which an individual in Spain challenged the consentless processing of his personal data. European courts established a right for individuals to ask search engines to remove inaccurate or outdated information about them (balanced against a public-interest test). But the E.U. courts did not halt Google’s processing of personal data for Internet search, because Google granted E.U. data subjects deletion and correction rights.

“Large language models don’t give such treatments and it’s not totally obvious they would, could or what the repercussions would be,” Edwards said, suggesting that mandatory deletion of the models themselves could be one outcome.

ChatGPT may have violated data protection laws.
