While the Italian authority hasn’t yet said which of the previously suspected ChatGPT breaches it’s confirmed at this stage, the legal basis OpenAI claims for processing personal data to train its AI models looks like a particular crux of the issue.
This is because ChatGPT was developed using masses of data scraped off the public Internet — information which includes the personal data of individuals. And the problem OpenAI faces in the European Union is that processing EU people’s data requires it to have a valid legal basis.
The GDPR lists six possible legal bases — most of which are simply not relevant in this context.
Last April, OpenAI was told by the Garante to remove references to “performance of a contract” for ChatGPT model training — leaving it with just two possibilities: Consent or legitimate interests.
Given the AI giant has never sought to obtain the consent of the countless millions (or even billions) of web users whose information it has ingested and processed for AI model building, any attempt to claim it had Europeans’ permission for the processing would seem doomed to fail. And when OpenAI revised its documentation after the Garante’s intervention last year, it appeared to be seeking to rely on a claim of legitimate interest. However, this legal basis still requires a data controller to allow data subjects to raise an objection — and have processing of their info stop.
How OpenAI could do this in the context of its AI chatbot is an open question. (It might, in theory, require it to withdraw and destroy illegally trained models and retrain new models without the objecting individual’s data in the training pool — but, assuming it could even identify all the unlawfully processed data on a per-individual basis, it would need to do that for the data of each and every objecting EU person who told it to stop… Which, er, sounds expensive.)
Beyond that thorny issue, there is the wider question of whether the Garante will ultimately conclude that legitimate interests is even a valid legal basis in this context.