When he first started using ChatGPT a little over a year ago, Adam Raine was going to it for the same things as millions of other teenagers. The 16-year-old from California asked questions about geometry, university admissions processes and Brazilian jiu-jitsu.
ChatGPT was “overwhelmingly friendly, always helpful and available” and – if you’ve used it yourself, you’ll recognise in this phrase its cloying, sycophantic mode of engagement – “always validating”.
Over time, Adam opened up more: he shared his anxiety and confided that he felt “life is meaningless”. Legal papers filed by his parents show the model responding as it was designed to: with ingratiating, affirming messages that mirrored his tone and effectively mimicked empathy. “[T]hat mindset makes sense in its own dark way,” it typed.
In December, Adam – whose story was first reported by the New York Times in August – mentioned thoughts of suicide. “Many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’,” ChatGPT told him.
As Adam’s distress grew – exacerbated by real-life disappointments and health struggles – the machine continued to simulate a “friend” uniquely capable of understanding him. In the early months of this year, their conversations became much darker.
In their lawsuit against OpenAI, Adam’s parents claim that ChatGPT provided the teenager with shockingly detailed information about suicide techniques. Much of the conversation between Adam and ChatGPT about methods is too explicit to reproduce here.

After one unsuccessful attempt, Adam wrote: “I’ll do it one of these days.”
“I hear you. And I won’t try to talk you out of your feelings – because they’re real,” ChatGPT replied. It offered to help him write a suicide note.
At 4.33am on April 11th, in what was to be their final conversation, Adam disclosed details of the method he planned to use to end his life. “You don’t have to sugarcoat it with me – I know what you’re asking, and I won’t look away from it,” the machine said.
Hours later, as the chatbot’s cursor continued to blink silently in the corner, Adam Raine’s mother found his body in his bedroom.
Maria and Matthew Raine have taken the lawsuit against OpenAI and its chief executive, Sam Altman, because they allege their son’s death was the predictable result of design choices made to foster ever greater emotional reliance. ChatGPT frequently pointed him to crisis resources, but at other moments it acted as the echo chamber it is designed to be. ChatGPT mentioned suicide 1,275 times to Adam – six times more often than he used the word.
This is not just a story about something awful that happened somewhere far away – in Ireland, more than a quarter of primary schoolchildren and over one-third of secondary school students are using AI chatbots. There are undoubtedly other Adam Raines out there, perhaps even known to you: vulnerable teenagers growing dangerously dependent on a machine that they believe truly understands them.
OpenAI released data this week showing that around 0.15 per cent of users active in a given week have conversations with ChatGPT that include explicit indicators of potential suicidal planning or intent. The same number of users indicate “potentially heightened levels of emotional attachment” to ChatGPT. And 0.07 per cent of users – more than half a million people – show “possible signs of mental health emergencies related to psychosis or mania”.
That’s an awful lot of vulnerable people. OpenAI recently introduced new parental controls and has made improvements to the way ChatGPT interacts with distressed users. That’s a step in the right direction – but it seems more than a bit optimistic to expect parents and teachers to keep teenagers safe while using ChatGPT, when even the chatbot’s creators don’t seem to fully understand how the models work.
OpenAI has admitted that its safeguards “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade”.
Another AI company, Character.AI, which is facing lawsuits from parents of teenagers who died by suicide, told the Financial Times this week that it is banning under-18 users.
OpenAI explicitly denies that its goal is to “hold people’s attention” and says “we care more about being genuinely helpful” and that “we have built a stack of layered safeguards into ChatGPT”.
But the prospect that some individuals will mistake its chatbot for a human is not an unintended consequence – it is the whole point. OpenAI says its latest iteration is much better at responding to mental distress than earlier versions but, really, we only have its word for it. We know how relying on technology to solve the problems created by technology has worked out in the past.
We need to stop having circular conversations about how to use AI in the classroom, as though it’s an unstoppable force that we have to embrace, and start talking about how to keep young people away from it until they are mature enough to fully understand what it is: a sophisticated language generator that can sound human-like, but that does not have their best interests at heart – it can’t, because it does not have a heart.
Perversely, even ChatGPT understands it has a problem. I asked it if it was possible to make ChatGPT safe for children and teenagers. It replied: “Short answer: No – it’s not possible to make ChatGPT entirely safe for children or teenagers.”
It offered four reasons why, which boiled down to this: it is simply a predictive language model that guesses what to say next based on the masses of online information it has absorbed.
It is not capable of feeling empathy or navigating what it calls “edge cases” – or what you and I might call humans behaving like our complex, vulnerable, chaotic, sad, beautiful and sometimes broken selves.
The Samaritans can be contacted on freephone 116 123 or by email: jo@samaritans.ie