Families of Canadian school shooting victims sue OpenAI, alleging chatbot aided attacker
Al Jazeera English
Families of victims in a Canadian mass school shooting have filed U.S. federal lawsuits against OpenAI, accusing the company of failing to alert police to the shooter’s alarming ChatGPT conversations before the February attack. The suits allege that OpenAI safety staff recommended contacting law enforcement but were overruled by CEO Sam Altman and other leaders, leaving the 18-year-old shooter free to continue planning the deadly assault.
Families of victims of a mass shooting in a remote town in the Canadian Rockies have sued OpenAI in U.S. federal court, alleging the maker of ChatGPT failed to warn law enforcement about the shooter’s troubling interactions with the chatbot.
The lawsuit filed Wednesday on behalf of Maya Gebala, 12, who was severely wounded in the February shooting, is among the first of more than two dozen cases being brought by families in Tumbler Ridge, British Columbia. Their lawyer said the litigation represents "an entire community stepping forward to hold OpenAI accountable."
Six other lawsuits filed in federal court in San Francisco bring wrongful-death claims against the company on behalf of five children and an educator killed in what was Canada’s deadliest mass shooting in years. The victims include Zoey Benoit, Abel Mwansa Jr., Ticaria "Tiki" Lampert and Kylie Smith, all 12; Ezekiel Schofield, 13; and educational assistant Shannda Aviugana-Durand.
According to police, Jesse Van Rootselaar, whose interactions with ChatGPT are central to the suits, shot his mother and half-brother at home before opening fire at his former school on Feb. 10, killing an educational assistant and five students ages 12 to 13. Van Rootselaar, 18, then died by suicide. Twenty-five others were wounded in the attack.
An OpenAI spokesperson called the shooting "a tragedy" and said the company has a zero-tolerance policy for using its tools to support violent behavior. "We have strengthened safety measures, including improving how ChatGPT responds to signs of distress, connecting people to local crisis resources, enhancing assessments and responses to potential threats of violence, and improving detection of repeat policy violators," the spokesperson said in a statement.
CEO Sam Altman sent a formal letter of apology to the Tumbler Ridge community last week, acknowledging the company’s failure to notify law enforcement about the shooter’s online activity.
The cases are part of a growing wave of litigation accusing AI companies of failing to prevent chatbot interactions that plaintiffs say contribute to self-harm, mental illness and violence. These appear to be the first U.S. suits alleging that ChatGPT played a role in facilitating a mass shooting.
Attorney Jay Edelson, representing the plaintiffs, said he plans to file two dozen additional suits in the coming weeks on behalf of others affected by the shooting. According to one complaint, OpenAI’s automated systems flagged ChatGPT conversations in June 2025 in which the attacker described violent gun scenarios.
Safety team members recommended contacting police after concluding he posed a credible and imminent threat, according to the suit, which cites a February Wall Street Journal article about the company’s internal discussions. But Altman and OpenAI leadership overruled the safety team, and police were never called, the lawsuit alleges. The shooter’s account was deactivated, but he was able to create a new one and continue using the platform to plan the attack.
After the Wall Street Journal report, the company said the account had been flagged by systems that identify "abuse of our models to support violent activity" but that the activity did not meet internal criteria for reporting to law enforcement. The suits allege "the victims know this not because OpenAI is transparent, but because the company’s own employees leaked it to the Wall Street Journal after they could no longer bear the company’s silence."
In a blog post Tuesday, OpenAI said it trains its models to refuse requests that could "materially enable violence" and informs law enforcement when conversations suggest "a credible and imminent risk of harm to others," with the help of mental health experts to evaluate borderline cases. The company said it continuously improves its models and detection methods based on usage and expert feedback.
The lawsuits seek unspecified damages and a court order requiring OpenAI to overhaul its safety practices, including mandatory law enforcement referral processes. One of the initial plaintiffs originally filed in Canadian court but withdrew to pursue claims in California, Edelson said. The cases follow similar ones filed in U.S. federal and state courts in recent months, alleging that ChatGPT facilitated harmful behavior, suicide and, in at least one instance, a murder-suicide.
The cases remain in their early stages and are expected to test the role of an AI platform in fostering violence, and whether companies can be held liable for users’ actions. OpenAI has denied the allegations, arguing in the murder-suicide case that the perpetrator had a long history of mental illness.