AI Compliance Risks Grow as OpenAI Faces GDPR Complaint

Article by Andrea Surnit

Key Takeaways:

  • OpenAI is facing a GDPR complaint over ChatGPT generating false and defamatory information, raising serious legal and ethical concerns.
  • Privacy advocacy group Noyb argues that AI-generated misinformation violates GDPR, which requires accurate personal data and provides individuals with rights to rectification.
  • This case could lead to significant regulatory action, including fines and stricter AI governance measures.

Can businesses trust AI when even top models like ChatGPT are accused of spreading harmful misinformation?

OpenAI is under scrutiny as privacy advocacy group Noyb filed a complaint with the Norwegian Data Protection Authority over ChatGPT’s generation of false personal information.

The case raises concerns about AI misinformation under the European Union’s General Data Protection Regulation (GDPR), which mandates the accuracy of personal data.

The complaint stems from an incident where ChatGPT falsely claimed Norwegian citizen Arve Hjalmar Holmen was convicted of murder.

While some details were correct, the fabricated allegations were damaging.

Noyb argues that such errors violate GDPR, which grants individuals the right to correct or delete inaccurate personal data.

Joakim Söderberg, a data protection lawyer at Noyb, stresses that under GDPR personal data has to be accurate: "You can’t just spread false information and then add a small disclaimer saying that everything may not be true."

GDPR breaches can lead to fines of up to €20 million or 4% of a company’s annual global revenue, whichever is higher.
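For a sense of scale, the GDPR cap can be sketched in a few lines (the revenue figure below is purely illustrative, not OpenAI’s actual turnover):

```python
def gdpr_fine_cap(annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR administrative fine for the most serious
    breaches: the greater of EUR 20 million or 4% of annual worldwide
    turnover."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

# Illustrative revenue figure only.
print(gdpr_fine_cap(3_000_000_000))  # 4% of EUR 3B = 120,000,000.0
```

For smaller companies, the €20 million floor dominates: `gdpr_fine_cap(100_000_000)` still returns 20,000,000.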

OpenAI’s past GDPR issues, including a €15 million fine in Italy for improper data processing, indicate that European regulators are closely monitoring AI privacy concerns.


This case could prompt stricter AI regulations, forcing OpenAI and other AI firms to improve data accuracy.

In fact, similar interventions have led to changes in how ChatGPT handles user data and transparency features.

Business and AI Governance Challenges

AI models like ChatGPT rely on probabilistic text generation, making them prone to “hallucinations” where they generate incorrect information.
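The mechanism behind such hallucinations can be illustrated with a minimal sketch: a language model samples each next token from a probability distribution, so a false but plausible continuation gets picked whenever it carries nonzero probability. The token probabilities below are invented for illustration and are not drawn from any real model:

```python
import random

# Hypothetical next-token probabilities after a prompt about a person.
# These numbers are illustrative only.
next_token_probs = {
    "father": 0.55,     # true continuation
    "musician": 0.30,   # plausible, harmless
    "convicted": 0.15,  # false, but nonzero -> occasional "hallucination"
}

def sample_token(probs: dict, rng: random.Random) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_token(next_token_probs, rng) for _ in range(1000)]

# Over many samples, the false continuation surfaces roughly 15% of the time.
print(samples.count("convicted") / len(samples))
```

The point of the sketch is that no single sample is "wrong" from the model’s perspective: sampling is working as designed, which is why accuracy guarantees of the kind GDPR assumes are hard to bolt on after the fact.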

While OpenAI has introduced safeguards, this case highlights the limitations of current AI accuracy controls.

Noyb has cited other false allegations made by ChatGPT, including individuals wrongly linked to corruption and child abuse.

These errors pose reputational risks and raise questions about AI’s reliability in business and legal contexts.

The growing scrutiny over AI-generated misinformation highlights the need for businesses to implement robust data governance and compliance measures when deploying AI tools.

Companies investing in AI should prioritize accuracy and transparency to mitigate reputational risks and regulatory penalties.
