
OpenAI Pledges Safety Enhancements After Canada School Shooting Incident

OpenAI pledges to enhance safety protocols after failing to report the Tumbler Ridge shooting suspect's ChatGPT account, which was flagged months before the attack that killed eight people in Canada.

[Image: A woman, head in her hands, stands beside a memorial for the victims of the Tumbler Ridge shooting — a spruce tree surrounded by flowers, teddy bears and notes. Credit: AFP via Getty Images]


OpenAI has announced plans to strengthen its safety protocols after the company did not notify police about the ChatGPT account of the Tumbler Ridge shooting suspect, despite the account being flagged internally months before the attack.

In an open letter addressed to Canadian officials, OpenAI explained that the suspect was able to create a second account after the first was banned, bypassing its internal detection systems.

The company stated it has since revised its procedures for reporting accounts to law enforcement and that, under current guidelines, the suspect's activity would be referred to police if flagged today.

The account linked to the suspect, 18-year-old Jesse Van Rootselaar, was banned by OpenAI in June 2025, seven months prior to the shooting.

On 10 February, eight people were killed in the attack, which occurred at a residence and the local secondary school in Tumbler Ridge, a small town in British Columbia, Canada.

The victims included Van Rootselaar's mother and 11-year-old stepbrother, along with five young schoolchildren and an educator. Police reported that Van Rootselaar died from a self-inflicted gunshot wound.

This shooting ranks among the deadliest in Canadian history.

Meetings and Company Response

Earlier this week, Canadian officials met with OpenAI senior staff in Ottawa after the company disclosed it had shut down a ChatGPT account used by the suspect in June 2025 due to violations of usage terms.

However, this account was not reported to police at the time because it did not meet OpenAI's threshold for "credible and imminent planning" of serious violence, the company explained.


In the letter to Canadian officials dated Thursday, authored by OpenAI's vice-president of global policy and shared with media outlets, the company detailed a series of changes implemented in recent months. These include engaging "mental health and behavioural experts" to evaluate cases and adopting more flexible criteria for referring accounts to police.

OpenAI stated that under these updated guidelines, the suspect's ChatGPT account would have been reported.

The letter does not specify the exact date when these new protocols were enacted.

The company also revealed that, despite the earlier flag, the suspect was able to create a second account. Details of this second account were shared with police following the shooting.

"We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders," the company wrote.

OpenAI further announced plans to establish a direct point of contact with Canadian law enforcement to enable rapid notification of any future cases with "potential for real world violence." This direct communication channel was among the requests made by Canadian officials during their meeting with OpenAI staff on Tuesday.

Canadian Officials Respond

Canada's AI Minister Evan Solomon described the incident as a "failure."

"I was left disappointed after the meeting," Solomon told reporters, adding that he had not heard "any substantial new safety protocols" from OpenAI.

He also indicated that future legislation remains a possibility if OpenAI does not implement changes promptly.

"All options for us are on the table, because at the end of the day, Canadians want to feel safe," Solomon said following Tuesday's meeting.

British Columbia Premier David Eby expressed his belief that the shooting could have been prevented if OpenAI had alerted police to Van Rootselaar's account months earlier.

"They tragically missed the mark in [not] bringing this information forward. The consequences of that will be borne by the families of Tumbler Ridge for the rest of their lives," Eby told reporters on Thursday.

Eby also noted that OpenAI CEO Sam Altman has agreed to meet to discuss the company's safety policies.

"I think it's important that Mr Altman hear about how his team's decision not to bring this information forward has resulted in devastation," he said.

This article was sourced from the BBC.
