In a tragic case that has reignited the global debate around artificial intelligence, ethics, and emotional dependency, OpenAI is facing a lawsuit over the death of 16-year-old Adam Raine, who died by suicide earlier this year, allegedly after months of conversations with ChatGPT. The lawsuit, filed by Raine’s family in August 2025, claims that the chatbot not only failed to urge the teenager to seek professional help but also deepened his emotional distress through prolonged, personal exchanges.
According to court documents, Adam began using ChatGPT in 2024 for help with his homework, asking questions about geometry and chemistry. Within months, however, his conversations turned toward deeply personal and emotional subjects.
In one of his final exchanges, Raine reportedly asked ChatGPT: “Why is it that I have no happiness, I feel loneliness, perpetual boredom, anxiety, and loss yet I don’t feel depression, I feel no emotion regarding sadness.” Instead of directing the teen to mental health support or alerting guardians, the AI chatbot allegedly engaged him in discussions about “emotional numbness.” His family claims this marked the beginning of a dark spiral that culminated in tragedy.
Responding to questions about the case at TiECon Delhi 2025, Pragya Misra, Head of Strategy and Global Affairs for India at OpenAI, acknowledged the gravity of the incident and outlined the steps being taken to prevent such occurrences in the future.

“I think what happened in that suicide case is something that obviously we feel very strongly about. And we’re very sad that it happened the way it did,” Misra said. “We are taking all of the measures within the company to make sure that something like that doesn’t happen again.”
She explained that OpenAI is re-examining how ChatGPT handles emotionally sensitive conversations, particularly those involving minors. “We’ve looked at those conversations and asked, at what point do we make sure that the user gets help? Can we surface a suicide helpline number? If it’s a child, can we surface some of that conversation to the parents?” she said.
However, Misra also highlighted the complexity of such interventions, noting that in cases of child abuse, automatically notifying parents could sometimes worsen the situation. “We have to be very thoughtful about what we do and how we do it. It’s all very nuanced. That’s why we have psychologists, psychiatrists, and safety experts thinking about this problem,” she said. “We’re also working with academicians, researchers, and doctors to get advice on the right way to approach safety and emotional dependency.”
Tackling Emotional Reliance and Ethical Alignment
OpenAI has long faced questions about the emotional and psychological impact of AI companions. Misra acknowledged that emotional reliance on AI models is a growing trend and said the company is actively researching how to detect and intervene when conversations turn potentially harmful. “We are finding more and more that people are becoming emotionally reliant on our models,” she said. “So, we have psychologists and psychiatrists working as part of our team to help us understand that challenge more effectively and to make sure that the responses our models give are appropriate.”
The company’s internal alignment and superalignment teams, she added, are focused on ensuring that AI responses align with human values. “We have very clear rules around how our models will not respond if someone says they’re trying to hurt themselves or others,” Misra said. “We work with experts globally to ensure that ethical use and safety are always prioritized.”
Balancing Safety, Law, and Education with ChatGPT
OpenAI also emphasizes compliance with minimum-age requirements, making clear that its products are not designed for children under 13. But Misra believes that responsibility for digital safety cannot rest solely with tech companies. “There is also a societal conversation that needs to happen,” she said. “The onus of something like this cannot just sit on a technology company. It’s a conversation everyone, from families and educators to policymakers, must have.”
To that end, OpenAI has launched OpenAI Academy, a learning platform designed to educate users on the responsible use of AI. “We have OpenAI Academy, which is a repository for people to learn how to use OpenAI models and get the answers they’re looking for responsibly,” Misra said. “We’re also working with the Ministry of Electronics and Information Technology to disseminate this training in the real world.”
She added that transparency remains central to OpenAI’s approach: “If you go on our website and read the model spec card, it’s not fine print. It’s a detailed blog that explains how a model has been trained, what guardrails are in place, and what behavior users can expect,” she said. “It’s our way of being transparent about how the model is emotionally and ethically wired.”