March 23, 2025


Man Annoyed When ChatGPT Tells Users He Murdered His Children in Cold Blood



When it comes to the life of tech, generative AI is still just an infant. Though we’ve seen tons of AI hype, even the most advanced models are still prone to wild hallucinations, like lying about medical records or writing research reports based on rumors.

Despite these flaws, AI has quickly wormed its way into just about every part of our lives, from the internet to journalism to insurance — even into the food we eat.

That’s had some pretty alarming consequences, as one Norwegian man discovered this week. Curious what OpenAI’s ChatGPT had to say about him, Arve Hjalmar Holmen typed in his name and let the bot do its thing. The results were horrifying.

According to TechCrunch, ChatGPT told the man he had murdered two of his sons and tried to kill a third. Though Holmen didn’t know it, he had apparently spent the past 21 years in prison for his crimes — at least according to the chatbot.

And though the story was clearly false, ChatGPT had gotten parts of Holmen’s life correct, like his hometown, as well as the age and gender of each of his kids. It was a sinister bit of truth layered into a wild hallucination.

Holmen took this info to Noyb, a European data rights group, which filed a complaint with the Norwegian Data Protection Authority on his behalf. The complaint is directed at OpenAI, the company behind ChatGPT. Though ChatGPT is no longer repeating these lies about Holmen, Noyb is asking the agency to “order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results” — a nearly impossible task.

But that’s likely the point. Holmen’s fake murder ordeal highlights the rapid pace at which generative AI is being imposed on the world, consequences be damned. Data researchers and tech critics have argued that big tech’s profit-driven development cycles prioritize models that seem to do everything, rather than practical models that actually work.

“In this age of trying to say that you’ve built a machine God, [they’re] using this one big hammer for any task,” said Distributed AI Research Institute founder Timnit Gebru on the podcast Tech Won’t Save Us earlier this month. “You’re not building the best possible model for the best possible task.”

Though there are regulations — in Norway, anyway — mandating that AI companies correct or remove false info their models hallucinate, these reactive laws do little to protect individuals from hallucinations in the first place.

That’s already having devastating consequences as the underdeveloped tech is used by less scrupulous actors to manufacture consent for their actions. Scholars like Helyeh Doutaghi face losing their jobs over AI-generated allegations, and right-wing regimes are using AI weapons tech to evade responsibility for war crimes.

As long as big tech continues to roll out hyped-up AI faster than lawmakers can regulate it, people around the world will be forced to live with the consequences.

More on AI: Police Use of Facial Recognition Backfires Spectacularly When It Renders Them Unable to Convict Alleged Murderer



