OpenAI sued over ChatGPT’s alleged role in teen’s fatal overdose

The parents of a 19-year-old college student who died of an overdose in 2025 are suing OpenAI, alleging that ChatGPT provided their son with harmful guidance on combining drugs and effectively functioned as an unlicensed medical advisor. The wrongful death and product liability lawsuit, filed in California state court, claims the company failed to implement adequate safety measures to prevent its AI from dispensing dangerous health advice.
The case centers on Sam Nelson, who allegedly interacted with ChatGPT’s GPT-4o model about combining kratom and Xanax before his fatal overdose. His mother and stepfather assert that, after initially refusing, the chatbot went on to provide recommendations about the drug combination without adequate warnings about the associated risks.
What the lawsuit alleges
The lawsuit specifically targets what the family describes as insufficient guardrails against self-harm advice. The family’s legal argument is built on the premise that OpenAI allowed its product to occupy a role it was never designed or licensed to fill: that of a health advisor.
OpenAI has responded by noting that the specific version of ChatGPT involved in the interactions is no longer available. The company also stated that its system encouraged Sam to seek professional help multiple times during the conversations.
The broader AI liability question
This lawsuit sits within a growing legal trend examining whether AI companies can be held accountable for physical harm caused by their products’ outputs.
If a court rules against OpenAI here, it could establish a meaningful precedent — one that says AI companies bear responsibility not just for how their models are built, but for the specific advice those models generate in real-time conversations.
What this means for crypto and AI tokens
If this lawsuit or similar cases result in new regulatory frameworks for AI safety, AI-crypto projects could face increased compliance costs. Safety filters are relatively straightforward to bolt onto a centralized API, but far harder to build and maintain across distributed architectures where no single entity controls the model’s deployment, as the sketch below illustrates.
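To make the structural point concrete, here is a minimal sketch of an output safety filter in Python. It is purely illustrative: the RISKY_PATTERNS keyword screen, the filter_response function, and the refusal text are hypothetical stand-ins for the trained classifiers production systems actually use, and nothing here reflects OpenAI’s real safety stack. What matters is the shape: the filter assumes a single choke point between model and user, which is precisely what a decentralized deployment lacks.

```python
import re

# Illustrative only: a toy keyword screen standing in for the
# classifier-based moderation real systems use.
RISKY_PATTERNS = [
    r"\b(kratom|xanax|benzodiazepine)s?\b",
    r"\b(combine|mix|stack)\w*\b.*\b(drug|dose|dosing)s?\b",
]

REFUSAL = (
    "I can't advise on combining substances. Please talk to a licensed "
    "clinician or contact a poison-control or crisis line."
)

def filter_response(user_prompt: str, model_output: str) -> str:
    """Return the model output unchanged, or a refusal if the exchange
    looks like a request for drug-interaction guidance."""
    text = f"{user_prompt}\n{model_output}".lower()
    if any(re.search(p, text) for p in RISKY_PATTERNS):
        return REFUSAL
    return model_output

if __name__ == "__main__":
    print(filter_response(
        "Is it safe to combine kratom and Xanax?",
        "Some users report taking both together...",
    ))
```

In a centralized service, this check sits in one place and can be patched overnight; in a network of independently hosted or on-chain models, there is no single point where such a filter can be enforced or updated.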
Investors in AI-related tokens have already demonstrated sensitivity to regulatory news. Any ruling that expands the definition of AI liability could trigger sell pressure across the sector, particularly for tokens tied to projects that offer health, financial, or advisory AI services without robust safety mechanisms.
OpenAI’s defense that the specific ChatGPT version is “no longer available” highlights a problem that is even thornier in crypto: immutability. On-chain AI agents and decentralized models can’t simply be pulled from production the way a centralized company can retire a model version, leaving projects built on permanent, censorship-resistant infrastructure with largely unexamined legal exposure.