ChatGPT under threat from European regulators

Concerns about GDPR compliance might extend to other AI solutions too.

On Friday, Italian regulators imposed an immediate ban on the generative AI tool ChatGPT, giving its creator, OpenAI, 20 days to address concerns about the way data is collected and processed, under penalty of a fine of $21.7 million or up to 4% of annual revenues, whichever is greater.

There have been indications that other European regulators may swiftly follow suit. Reports suggest that France is conducting its own inquiry; Ireland has asked Italy for more details about the basis for the ban; and the German data commissioner has said that the same action could “in principle” be taken in Germany.

Why we care. Given the immense excitement created by the availability of ChatGPT and similar tools, it was perhaps too easy to overlook warnings emerging from the legal profession over the last few months that it could run afoul of European data regulations — regulations which, in many ways, have become a de facto global standard.

If the questions that arise need to work their way through the European legal system for adjudication, that could take some time, of course. But it’s clear that regulators in European nations can take swift action in the meantime.


Lawful bases for processing data. One fundamental challenge for large language models like ChatGPT is that, under European law, specifically the GDPR, there are only six lawful bases for processing personal data (data that can directly identify an individual, or do so indirectly in combination with other information). The bases are:

  • Consent.
  • Performance of a contract.
  • A legitimate interest.
  • A vital interest (a matter of life and death).
  • A legal requirement.
  • A public interest.

To the extent a large language model is being trained on data obtained without explicit consent, it’s by no means clear that any of these bases are applicable — unless, perhaps, one makes the bold assumption that the availability of AI solutions is in the public interest.

Data erasure. Another challenge is whether a solution like ChatGPT can support the “right to be forgotten.” Under GDPR, in certain circumstances, an individual can request the erasure of their data. To be clear, ChatGPT is not scraping the web and heedlessly collecting large quantities of personal data. But it is trained on very large sets of texts, and the question OpenAI may have to address is whether it knows what those sets contain in terms of personally identifying information, or data it might be asked to erase.


The post ChatGPT under threat from European regulators appeared first on MarTech.


About the author

Kim Davis is the Editorial Director of MarTech. Born in London, but a New Yorker for over two decades, Kim started covering enterprise software ten years ago. His experience encompasses SaaS for the enterprise, digital advertising, data-driven urban planning, and applications of SaaS, digital technology, and data in the marketing space. He first wrote about marketing technology as editor of Haymarket’s The Hub, a dedicated marketing tech website, which subsequently became a channel on the established direct marketing brand DMN. Kim joined DMN proper in 2016 as a senior editor, becoming Executive Editor, then Editor-in-Chief, a position he held until January 2020. Prior to working in tech journalism, Kim was Associate Editor at a New York Times hyper-local news site, The Local: East Village, and has previously worked as an editor of an academic publication and as a music journalist. He has written hundreds of New York restaurant reviews for a personal blog, and has been an occasional guest contributor to Eater.