OpenAI's hugely popular chatbot ChatGPT regularly produces false information about people without offering any way to correct it. In many cases, these so-called "hallucinations" can seriously damage a person's reputation: in the past, ChatGPT has falsely accused people of corruption, child abuse, and even murder. The latter case involved a Norwegian user. When he tried to find out whether the chatbot had any information about him, ChatGPT confidently invented a fake story that portrayed him as a convicted murderer. This is clearly not an isolated case. NOYB has therefore filed a second complaint against OpenAI. By knowingly allowing ChatGPT to produce defamatory results, the company violates the GDPR's principle of data accuracy.
Artificial intelligence hallucinations
From innocent mistakes to slanderous lies. The rapid rise of AI chatbots like ChatGPT has been accompanied by critical voices warning that users can never be sure the output is factually correct. This is because these AI systems merely predict the most likely next word in response to a prompt. As a result, AI systems regularly hallucinate: they simply make things up. While this can be harmless or even amusing in some cases, it can also have disastrous consequences for people's lives. There have been several media reports of fabricated sexual harassment scandals, false bribery allegations and invented child molestation claims, which have already resulted in lawsuits against OpenAI. OpenAI has responded by displaying a small disclaimer saying that ChatGPT may produce false results.
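To illustrate the point about next-word prediction, the Python sketch below shows, in a deliberately simplified way, how a model that only samples from a probability distribution over continuations can produce a fluent but entirely false statement. This is not OpenAI's actual implementation; the probability table and names are invented for illustration.

```python
import random

# Toy stand-in for a language model's next-token distribution.
# Real models derive these probabilities from billions of parameters;
# the numbers below are made up for illustration only.
next_phrase_probs = {
    "was convicted of a serious crime": 0.46,  # fluent, plausible-sounding, but false
    "is a musician from a small town": 0.31,
    "has no notable public record": 0.23,
}

def pick_next_phrase(probs: dict[str, float]) -> str:
    """Sample a continuation in proportion to its assigned probability."""
    phrases = list(probs)
    weights = list(probs.values())
    return random.choices(phrases, weights=weights, k=1)[0]

prompt = "According to available information, this person"
print(prompt, pick_next_phrase(next_phrase_probs))
# The sampler never checks whether the chosen continuation is true;
# it only reflects which phrasing the (toy) model considers most likely.
```

The point of the sketch is simply that nothing in the sampling step verifies facts: whichever continuation scores highest can be output as confident prose, which is how reputations end up damaged.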
Joakim Söderberg, data protection lawyer at NOYB: "The GDPR is clear. Personal data must be accurate. And if they are not, users have the right to have them changed to reflect the truth. Showing ChatGPT users a minor warning that a chatbot can make mistakes is clearly not enough. You can't just spread false information and add a little disclaimer at the end that everything you said just might not be true..."
ChatGPT fabricated a murder conviction and a prison sentence
Unfortunately, these incidents are not a thing of the past. When the Norwegian user Arve Hjalmar Holmen wanted to see whether ChatGPT had any information about him, he was confronted with a fabricated horror story: ChatGPT presented the complainant as a convicted felon who had murdered two of his children and attempted to murder his third son. To make matters worse, the fake story contained real elements of his personal life, among them the actual number and gender of his children and the name of his hometown. ChatGPT also claimed that he had been sentenced to 21 years in prison. Given the combination of clearly identifiable personal data and false information, this is undoubtedly a violation of the GDPR: under Article 5(1)(d), companies must ensure that the personal data they produce about individuals is accurate.

Arve Hjalmar Holmen, the complainant: "Some people think that 'there is no smoke without fire'. The fact that someone could read this output and believe it is true is what scares me the most."
Potentially far-reaching consequences
Unfortunately, OpenAI also appears to be neither willing nor able to seriously correct false information in ChatGPT. NOYB filed its first complaint concerning hallucinations in April 2024. At that time, we requested the correction or deletion of an incorrect date of birth of a public figure. OpenAI simply claimed that it could not correct the data. Instead, it can only "block" the data for certain prompts, but the false information still remains in the system. Although the damage may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data. In addition, the company tries to circumvent its data accuracy obligations by showing ChatGPT users a warning that the tool "may make errors" and that they should "check important information". However, the legal obligation to ensure the accuracy of the personal data being processed cannot be circumvented by a disclaimer.
Cleanthi Sardeli, data protection lawyer at NOYB: "Adding a disclaimer that you are not following the law does not make the law go away. Nor can AI companies simply 'hide' false information from users while still processing it internally. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If the hallucinations don't stop, people can easily suffer reputational damage."
ChatGPT is now officially a search engine
Since the incident involving Arve Hjalmar Holmen, OpenAI has updated its model. ChatGPT now also searches the internet for information about people when asked who they are. Fortunately for Arve Hjalmar Holmen, this means that ChatGPT has stopped presenting him as a murderer. However, the incorrect data may still remain part of the LLM's dataset. By default, ChatGPT feeds user data back into the system for training purposes. This means that, given the current state of the art, there is no way for an individual to be certain that such output has been completely erased unless the entire AI model is retrained. Moreover, OpenAI does not even comply with the right of access under Article 15 of the GDPR, making it impossible for users to find out what the system processes about them internally. This understandably continues to cause the complainant anxiety and fear.
Complaint lodged in Norway
NOYB has therefore filed a complaint with the Norwegian Datatilsynet. By knowingly allowing its AI model to generate defamatory output about users, OpenAI is in breach of the data accuracy principle under Article 5(1)(d) of the GDPR. NOYB requests that Datatilsynet order OpenAI to delete the defamatory output and fine-tune its model to eliminate the inaccurate results. Finally, NOYB suggests that the DPA impose an administrative fine to prevent similar breaches in the future.
NOYB, which stands for None of Your Business, is an Austrian consumer organisation based in Vienna. It was founded in 2017 and is dedicated to privacy protection across Europe.
NOYB/ gnews.cz - RoZ