Italy’s data protection authority has imposed a €15 million ($15.7 million) fine on OpenAI following an investigation into the company’s flagship AI chatbot, ChatGPT.
The Italian Data Protection Authority (IDPA) found that OpenAI had failed to notify the regulator of a data breach that occurred in March 2023. The agency also determined that OpenAI had processed users’ personal data to train ChatGPT without establishing an adequate legal basis, in violation of transparency obligations under EU data protection law.
The IDPA further found that OpenAI lacked adequate age verification measures, allowing minors to access the platform and potentially be exposed to content inappropriate for their age and development.
In response to these findings, the IDPA has ordered OpenAI to run a six-month public awareness campaign explaining how ChatGPT works, particularly its data collection practices and users’ rights, including the right to object to the use of their personal data for AI training under the General Data Protection Regulation (GDPR).
OpenAI’s cooperation during the investigation was cited as a factor in the reduced fine, though the company remains under scrutiny under the GDPR, which allows fines of up to €20 million or 4% of annual global turnover, whichever is higher. During the investigation, OpenAI established its European headquarters in Ireland, making the Irish Data Protection Commission the lead supervisory authority for ongoing matters.