OpenAI debated calling police about suspected Canadian shooter’s chats
#OpenAI · #ChatGPT · #Mass shooting · #AI safety · #Digital monitoring · #Jesse Van Rootselaar · #Tumbler Ridge · #AI ethics
📌 Key Takeaways
OpenAI debated contacting police about Jesse Van Rootselaar's concerning ChatGPT usage
Van Rootselaar's chats describing gun violence were flagged by OpenAI's monitoring tools, and her account was banned, in June 2025
The suspect had other concerning digital activities, including a Roblox game simulating a mass shooting
OpenAI ultimately decided not to contact law enforcement before the incident occurred
📖 Full Retelling
OpenAI debated internally whether to contact Canadian law enforcement about 18-year-old Jesse Van Rootselaar of Tumbler Ridge, Canada, after the company's monitoring tools flagged her ChatGPT conversations describing gun violence in June 2025. Those conversations preceded a mass shooting in which eight people were killed. OpenAI's safety systems detected Van Rootselaar's misuse of the platform and banned her account, but staff disagreed on whether to proactively alert authorities about the potential threat.

According to The Wall Street Journal, OpenAI ultimately determined that Van Rootselaar's activity did not meet its criteria for reporting to law enforcement at the time, though the company did reach out to Canadian authorities after the shooting occurred. Beyond her ChatGPT interactions, Van Rootselaar had created a Roblox game simulating a mass shooting at a mall and had posted about guns on Reddit. Local police were already aware of her instability following an incident in which she started a fire while under the influence of unspecified drugs.

The case highlights growing concerns about AI safety and the risks posed by individuals using advanced language models with harmful intent, as OpenAI and other tech companies face increasing pressure to balance user privacy with public safety responsibilities.
# OpenAI
**OpenAI** is an American artificial intelligence (AI) research organization headquartered in San Francisco, California. The organization operates under a unique hybrid structure, comprising the non-profit **OpenAI, Inc.** and its controlled for-profit subsidiary, **OpenAI Global, LLC**.
# ChatGPT
**ChatGPT** is a generative artificial intelligence chatbot developed by OpenAI. It was released in November 2022. It uses generative pre-trained transformers (GPTs) to generate text, speech, and images in response to user prompts. It is credited with accelerating the AI boom.
# Mass shooting
A **mass shooting** is a violent crime in which one or more attackers use a firearm to kill or injure multiple individuals in rapid succession. Mass shootings with multiple deceased victims are a form of mass murder. There is no widely accepted specific definition of the term.
# AI safety
**AI safety** is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses AI alignment (which aims to ensure AI systems behave as intended), monitoring AI systems for risks, and enhancing their robustness.
OpenAI's decision not to report a user whose chats described gun violence highlights tensions between privacy, safety, and law enforcement. The case raises questions about how AI companies monitor and respond to potentially dangerous content, and underscores the need for clear policies on when user data should be shared with authorities.
Context & Background
An 18-year-old user allegedly killed eight people in a mass shooting in Tumbler Ridge, Canada
The user’s chats describing gun violence were flagged by OpenAI’s monitoring tools, and her account was banned, in June 2025
OpenAI debated but ultimately did not contact Canadian police before the incident, later reaching out after the shooting
What Happens Next
OpenAI may review its reporting thresholds and strengthen its content‑monitoring protocols. The incident could prompt regulators to clarify legal obligations for AI firms regarding suspicious user behavior. Users and developers will likely see tighter safeguards and clearer guidelines around violent content.
Frequently Asked Questions
Why did OpenAI decide not to report the user before the shooting?
OpenAI determined that the user’s activity did not meet its internal criteria for reporting to law enforcement, so it did not share the data until after the incident.
What steps has OpenAI taken after the shooting?
OpenAI reached out to Canadian authorities after the event and is reportedly reviewing its policies on monitoring and reporting violent content.
How can users report concerns about dangerous content on OpenAI platforms?
Users can flag content through the platform’s reporting tools, and OpenAI reviews flagged material to decide whether it requires further action.
Original Source
In Brief · Posted: 7:25 AM PST · February 21, 2026 · Tim Fernholz

OpenAI debated calling police about suspected Canadian shooter’s chats

An 18-year-old who allegedly killed eight people in a mass shooting in Tumbler Ridge, Canada, reportedly used OpenAI’s ChatGPT in ways that alarmed the company’s staff. Jesse Van Rootselaar’s chats describing gun violence were flagged by tools that monitor the company’s LLM for misuse, and her account was banned in June 2025. Staff at the company debated whether or not to reach out to Canadian law enforcement over the behavior but ultimately did not, according to the Wall Street Journal. An OpenAI spokesperson said Van Rootselaar’s activity did not meet the criteria for reporting to law enforcement; the company reached out to Canadian authorities after the incident.

ChatGPT transcripts weren’t the only concerning part of Van Rootselaar’s digital footprint. She apparently created a game on Roblox, the world simulation platform frequented by children, which simulated a mass shooting at a mall. She also posted about guns on Reddit. Van Rootselaar’s instability was also known to local police, who had been called to her family’s home after she started a fire while under the influence of unspecified drugs.

LLM chatbots built by OpenAI and its competitors have been accused of triggering mental breakdowns in users who lose grip on reality while conversing with digital models. Multiple lawsuits have been filed that cite chat transcripts that encourage people to commit suicide or offer assistance in doing so.

If you are in a crisis or having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline.