Florida Opens Criminal Investigation Into OpenAI Over Alleged AI Role In FSU Shooting
Florida has launched a criminal investigation into OpenAI following a 2025 mass shooting at Florida State University that authorities say involved the shooter's use of its AI chatbot, ChatGPT.
Attorney General James Uthmeier announced the probe, alleging the gunman relied on ChatGPT for information related to firearms and planning details before carrying out the attack.
The shooter, a 20-year-old student at the university, killed two people and wounded six others in the April 17, 2025, attack. He has since been charged with multiple counts of murder and attempted murder.
According to reports cited by state officials, the attacker used ChatGPT to ask about weapon effectiveness and crowd patterns at the student union, as well as how a mass shooting might be perceived nationally.
Uthmeier argued that a human who provided similar guidance could face criminal charges, adding that AI tools should not be allowed to assist in acts of violence.
The criminal inquiry follows a separate civil investigation into OpenAI opened earlier. Authorities are expected to examine whether the company or its employees could bear legal responsibility for how the system responded to the shooter's queries.
OpenAI pushed back strongly on the allegations, stating that it cooperated with law enforcement and proactively shared information about the shooter’s account. The company also emphasized that ChatGPT provides general information available from public sources and does not promote or encourage harmful behavior.
The case raises broader questions about how AI systems handle sensitive or dangerous topics and where responsibility lies when such tools are used by individuals committing crimes.
The Florida incident is not the only case drawing scrutiny. Reports indicate that an attacker in Canada also interacted with ChatGPT before committing acts of violence, prompting further debate over safeguards and companies' reporting responsibilities.
OpenAI CEO Sam Altman has previously discussed the idea of “AI privilege,” suggesting conversations with AI systems could one day receive protections similar to those between individuals and professionals like doctors or lawyers.
The investigation is expected to test the legal boundaries of AI accountability as policymakers and law enforcement grapple with the rapidly evolving role of artificial intelligence in society.
