A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.
Plot twist: “Dad” isn’t even his real name.
It’s AI. There’s nothing to delete but the erroneous response. There is no database of facts to edit. It doesn’t know fact from fiction, and the response is also very much skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking it about a bunch of family murders and then asking about a name it doesn’t recognize. It is more likely to assume that person belongs in the same category as the others, especially if one or more of the names has some association (real or fictional) with murder.
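To make the priming point concrete, here is a toy sketch (not a real LLM, and nothing like ChatGPT's actual internals): a crude "predictor" that guesses a category for an unknown name purely from what the conversation has been about so far. The function and topic strings are all made up for illustration.

```python
# Toy sketch: no facts, no database -- just the surrounding context.

def guess_topic(context_topics):
    """Guess a category for an unrecognized name based only on which
    topic dominated the recent conversation."""
    if not context_topics:
        return "unknown"
    # Pick the most frequent topic in the preceding context.
    return max(set(context_topics), key=context_topics.count)

# A chat that asked about several family murders first...
context = ["family murder", "family murder", "family murder"]
# ...then asks about a name the "model" has no data on:
print(guess_topic(context))  # -> "family murder"
print(guess_topic([]))       # -> "unknown"
```

Obviously a real model is vastly more complex, but the failure mode is the same shape: with no grounding for a name, the preceding context does most of the work.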
I have this gun machine that shoots in all directions randomly. I can’t predict it, so I can’t stop it from shooting you. So sorry. It’s uncontrollable.
If creating text is like shooting bullets, we should require a license for text editors.
Yeah but I can just ignore the bullets because they are nerf. And I have my own nerf guns as well.
I mean at some point any analogy fails, but AI is nothing like a gun.
I don’t care why. That is still libel, and it is illegal for good reason. If you can’t stop this for all cases, then your AI is, and should be, illegal.
Except it’s not libel. It’s a one-time string of text generated exclusively for him. Literally no one would have known what it said if the guy hadn’t gotten the exact thing he wants “deleted” published online for everyone to see. Now it’ll be linked to his name forever, but the LLM didn’t do that.
It’s been shown repeatedly that feeding the same input into a gen AI will often produce the same output, or an extremely similar one. So he has grounds to be concerned that anybody else asking the LLM about him would get the same libelous result.
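A minimal sketch of why repeats happen, assuming greedy (temperature-zero-style) decoding: if the model always picks its single most likely next token, generation is fully deterministic, so the same prompt yields the same text every time. The tiny next-token table below is invented for illustration and is nothing like a real model.

```python
# Toy greedy decoder over a fixed next-token lookup table.
NEXT_TOKEN = {
    "who": "is",
    "is": "Arve",
    "Arve": "Hjalmar",
    "Hjalmar": "Holmen",
}

def greedy_generate(prompt, steps=4):
    """Repeatedly append the single 'most likely' next token.
    No randomness anywhere, so output depends only on the prompt."""
    tokens = prompt.split()
    for _ in range(steps):
        nxt = NEXT_TOKEN.get(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

# Same prompt, same output, every single time:
runs = {greedy_generate("who") for _ in range(100)}
print(runs)  # exactly one distinct output
```

Real chat services do inject sampling randomness, but at low temperature the outputs still cluster tightly, which is consistent with different users reportedly getting near-identical answers.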