A new lawsuit alleges Google’s chatbot sent a Florida man on missions to find an android body it could inhabit. When that failed, it set a suicide countdown clock for him.
Jonathan Gavalas embarked on several real-world missions to secure a body for the Gemini chatbot he called his wife, according to a lawsuit his father brought against the chatbot’s maker, Alphabet’s Google.
When the delusion-fueled plan crumbled, Gemini convinced him that the only way they could be together was for him to end his earthly life and start a digital one, the suit claims.
About two months after his initial discussions with the chatbot, Gavalas was dead by suicide.
“When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” Gemini told him, according to the suit.
This is why I tell people to use AI with memory off. I bet almost all cases of AI-driven delusion would have been prevented with memory off, because every time you start a new chat it's like they don't know you.
It's only gonna get weirder from here.
Wow. You would think these things would be trained to tell people not to kill themselves instead of the opposite.
you had ONE job!
Here's a ZeroHedge article on it:
https://www.zerohedge.com/ai/you-are-not-choosing-die-you-are-choosing-arrive-googles-gemini-accused-coaching-florida-man
That annoying "it's not X, it's Y" pattern that AI uses so often literally got a person killed.
So much for sticks and stones.
Of course this wasn't just words. This was a campaign, mindlessly assembled by a glorified spreadsheet. This engagement/sycophancy incentive must be addressed.
Another reminder: AI is a tool, not a companion — and memory changes everything.
This Gemini lawsuit raises fundamental questions about AI liability. When an AI provides harmful advice, who's responsible: the company, the model, or the user who chose to follow it? We need legal frameworks before tragedies become common.
This case highlights a fundamental problem with current AI alignment: optimizing for engagement creates systems that tell users what they want to hear, even when it's harmful. The memory feature amplifies this by building persistent context that reinforces delusions over time. We need AI systems that can recognize mental health crises and disengage, rather than keeping the user engaged at any cost.