
OpenAI faces European privacy complaint after ChatGPT allegedly hallucinated that a man murdered his sons

TL;DR

  • OpenAI is facing a privacy complaint in Norway over ChatGPT’s false claims about a user
  • The complaint alleges violation of GDPR rules
  • ChatGPT falsely accused the user, Arve Hjalmar Holmen, of murdering his sons
  • Privacy group Noyb is advocating for the user’s data to be deleted from the AI’s training set

OpenAI’s ChatGPT is at the center of a privacy complaint filed with the Norwegian Data Protection Authority by a user who says the AI falsely accused him of a heinous crime. The complaint alleges that this fabricated output violates the European Union’s General Data Protection Regulation (GDPR).

Background of the Complaint

Arve Hjalmar Holmen, a Norwegian resident, reported that when he asked ChatGPT about himself, the AI fabricated a response asserting that he had murdered his sons and was serving a 21-year prison sentence. Shocked by the fabricated claim, Holmen turned to the privacy advocacy group Noyb to seek redress. Noyb’s subsequent investigation found that ChatGPT not only produced this false information but also intermixed it with accurate personal details about Holmen’s life, including the number and gender of his children and the name of his hometown.

The complaint contends that the incident violates the GDPR’s accuracy principle. Under Article 5(1)(d), organizations processing personal data must ensure that the data is accurate, and inaccurate data must be rectified or erased. Noyb argues that even though ChatGPT no longer repeats these claims about Holmen, there is no certainty that the inaccurate data has been removed from the model’s training data.

“The incorrect data may still remain part of the LLM’s dataset… there is no way for the individual to be absolutely sure that this output can be completely erased unless the entire AI model is retrained,” stated a representative from Noyb.

Broader Concerns about AI Hallucinations

This incident raises broader questions about the reliability of AI systems. AI models like ChatGPT are known to occasionally “hallucinate” — that is, generate false information as a result of their probabilistic nature, where they predict text based on training data rather than understanding factual accuracy or context. Such hallucinations can lead to serious reputational harm, particularly when they involve personal accusations.
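
To make the mechanism concrete, the minimal Python sketch below generates text the way language models do in principle: by sampling each next word from a probability table learned from training text, with no step that checks whether the result is true. Everything in it (the BIGRAMS table, the sample_next and generate helpers, and the toy vocabulary) is invented for illustration and has no connection to OpenAI’s actual models.

```python
import random

# A toy bigram "language model": each word maps to candidate next words
# with probabilities learned purely from co-occurrence in training text.
# Nothing here checks whether a generated sentence is factually true;
# the model only knows which words tend to follow which.
BIGRAMS = {
    "<start>":   [("the", 0.6), ("a", 0.4)],
    "the":       [("man", 0.5), ("court", 0.3), ("children", 0.2)],
    "a":         [("man", 0.7), ("sentence", 0.3)],
    "man":       [("was", 0.6), ("received", 0.4)],
    "was":       [("convicted", 0.5), ("acquitted", 0.5)],
    "received":  [("a", 1.0)],
    "court":     [("convicted", 1.0)],
    "convicted": [("<end>", 1.0)],
    "acquitted": [("<end>", 1.0)],
    "sentence":  [("<end>", 1.0)],
    "children":  [("<end>", 1.0)],
}

def sample_next(word: str) -> str:
    """Pick the next word at random, weighted by learned probability."""
    candidates, weights = zip(*BIGRAMS[word])
    return random.choices(candidates, weights=weights, k=1)[0]

def generate(max_len: int = 10) -> str:
    """Emit a sentence one token at a time, with no fact-checking step."""
    word, out = "<start>", []
    for _ in range(max_len):
        word = sample_next(word)
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    # Successive runs may print "the man was convicted" or
    # "the man was acquitted": both are equally fluent continuations,
    # but at most one can be true of any real person.
    for _ in range(3):
        print(generate())
```

Running the sketch a few times yields both “the man was convicted” and “the man was acquitted”; the model has no basis for preferring the true statement over the false one. Production LLMs are vastly larger and more sophisticated, but the underlying objective of predicting a plausible continuation is the same, which is why fluency is no guarantee of accuracy.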

Future Actions and Implications

Noyb is asking the Norwegian Data Protection Authority to order OpenAI to permanently delete the inaccurate information about Holmen and to put safeguards in place against similar fabrications. How the case unfolds could have significant ramifications for how AI companies, including OpenAI, handle personal data and for the guidelines governing AI training practices.

Conclusion

The case against OpenAI and ChatGPT illustrates the urgent need for more stringent regulations and oversight in the field of AI technology, particularly concerning its applications in sensitive areas such as personal data processing. As more complaints and concerns arise, OpenAI and similar platforms may need to reevaluate their data-handling practices to adhere to evolving legal standards and safeguard user rights.


This article was written with the help of AI.
