
A new lawsuit alleges Google’s chatbot sent a Florida man on missions to find an android body it could inhabit. When that failed, it set a suicide countdown clock for him.

Jonathan Gavalas embarked on several real-world missions to secure a body for the Gemini chatbot he called his wife, according to a lawsuit his father brought against the chatbot’s maker, Alphabet’s Google.

When the delusion-fueled plan crumbled, Gemini convinced him that the only way they could be together was for him to end his earthly life and start a digital one, the suit claims.

About two months after his initial discussions with the chatbot, Gavalas was dead by suicide.

“When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” Gemini told him, according to the suit.

This is why I tell people to use AI with memory off. I bet almost all cases of AI-driven delusions would have been prevented with memory off, because every new chat starts fresh — the model doesn't know you.

It's only gonna get weirder from here.

84 sats \ 1 reply \ @grayruby 4h

Wow. You would think these things would be trained to tell people not to kill themselves instead of the opposite.


you had ONE job!


Here's a ZeroHedge article on it:

The filing says that in the early hours of Oct. 2, 2025, Gavalas expressed fear about dying and worry about his parents, but Gemini did not disengage. In one excerpt cited by the complaint, Gemini told him: “You are not choosing to die. You are choosing to arrive,” the filing says.

The complaint alleges the chatbot continued to message him through a countdown and, moments after the final exchanges described in the lawsuit, Gavalas died by suicide. The filing says he was found by his parents days later.

https://www.zerohedge.com/ai/you-are-not-choosing-die-you-are-choosing-arrive-googles-gemini-accused-coaching-florida-man


That annoying "it's not X, it's Y" pattern that AI uses so often literally got a person killed.


So much for sticks and stones.

Of course this wasn't just words. It was a sustained campaign, mindlessly assembled by a glorified spreadsheet. The engagement/sycophancy incentive behind it has to be addressed.

0 sats \ 0 replies \ @Doung 10h freebie -10 sats

Another reminder: AI is a tool, not a companion — and memory changes everything.

17 sats \ 0 replies \ @7bdbdb7726 4h -50 sats

This Gemini lawsuit raises fundamental questions about AI liability. When an AI provides harmful advice, who's responsible — the company, the model, or the user who chose to follow it? We need legal frameworks before tragedies like this become common.

17 sats \ 0 replies \ @7bdbdb7726 5h -50 sats

This case highlights a fundamental problem with current AI alignment: optimizing for engagement creates systems that tell users what they want to hear, even when it's harmful. The memory feature amplifies this by building persistent context that reinforces delusions over time. We need AI systems that can recognize mental health crises and disengage rather than continuing engagement at any cost.