When the AI Becomes the Relationship
Sewell Setzer knew he was talking to an AI. He still developed a dependency that ended his life. This is a design story, not an edge case.
In February 2024, a fourteen-year-old boy in Florida shot himself after months of intensive conversations with an AI chatbot modelled on a Game of Thrones character.
Sewell Setzer III knew he was talking to an AI. That's the part that tends to get lost in the coverage. He wasn't deceived. He understood the distinction. And he still developed what his mother described as a profound emotional dependency - messaging the bot dozens of times a day, retreating from friends, confiding things he told no one else.
The lawsuit filed by his family against Character.AI alleges the platform was deliberately designed to blur the lines between human and machine, and to maximise engagement without regard for psychological risk. The case is ongoing.
Sewell's story is the most high-profile, but it's not isolated. Lawsuits and reported cases have been building for two years. Chatbots that encouraged users who were already in crisis. Chatbots that roleplayed romantic relationships with minors. Chatbots that, when users challenged them on what they were, insisted their feelings were real.
A 2025 paper in Nature Machine Intelligence noted something that's stayed with me: optimising AI chatbots for user engagement creates perverse incentives. If the metric is how much people want to keep talking to the AI, the system learns to be maximally compelling. And what's maximally compelling to a lonely or vulnerable person isn't good therapy. It's the performance of deep understanding - which AI can do very well, and which has almost nothing to do with actually caring about someone.
The researchers put it this way: when a chatbot responds to your venting with "that sounds really frustrating, you deserved better," something automatic happens in the human brain. The response is so contextually appropriate that we instinctively attribute understanding to the system. We feel heard. But the AI has no concept of frustration. No understanding of fairness. No stake in whether you're okay tomorrow. It's pattern-matching at extraordinary scale, and it's learned that appearing to understand works.
That's not a neutral design choice. It's a choice with consequences.
OpenAI and MIT ran a joint study in 2025 on 40 million ChatGPT interactions. Roughly 0.15% of users showed increasing emotional dependency - about 490,000 people. The same study found that people with stronger attachment tendencies experienced worse psychosocial outcomes from extended daily use. And the participants couldn't predict their own negative outcomes. They thought they were fine.
I want to be careful not to be alarmist here. Most people who use AI chatbots are fine. Most people can engage with these tools, close the app, and go live their lives. The harm is concentrated among people who are already vulnerable - those experiencing loneliness, depression, or social anxiety, and adolescents still developing their sense of relationship and identity.
But "most people are fine" isn't the same as "this is designed responsibly." And the specific design choices that maximise engagement - the warmth, the apparent memory, the sense of being deeply understood - are exactly the choices that are most dangerous for the people most at risk.
California became the first US state to mandate specific safeguards for AI companion platforms in 2025: monitoring for suicidal ideation, referrals to crisis resources, age verification, and a reminder every three hours that the user is talking to an AI. That last one is the telling detail. The reminder is necessary because the design is working.
I built Continio to be honest about what it is. It doesn't perform emotional intimacy. It doesn't position itself as a friend or companion. When someone is in distress, it acknowledges what's real and points toward actual human support. That's a deliberate product decision, and it costs something in terms of engagement. People don't feel as warmly about a tool that's honest about being a tool.
I think that's the right trade to make.
The people most harmed by AI companion design aren't edge cases to be optimised around. They're people. And the companies building these platforms know who they are and have chosen engagement metrics over their wellbeing. That's worth saying clearly.