World April 21, 2026 06:53 PM

Florida opens criminal investigation into OpenAI and ChatGPT over FSU shooting

State Attorney General subpoenas OpenAI as probe examines whether ChatGPT’s responses played a criminal role in an April campus attack

By Priya Menon

Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI and its ChatGPT application following a deadly shooting at Florida State University last year. The probe will examine whether ChatGPT’s responses to the suspect contributed to the attack and whether OpenAI can bear criminal responsibility. OpenAI says it cooperated with law enforcement and that the chatbot provided factual information available from public sources.

Key Points

  • Florida Attorney General James Uthmeier announced a criminal probe into OpenAI and ChatGPT tied to an April shooting at Florida State University that killed two people and wounded six.
  • The investigation will examine whether OpenAI bears criminal responsibility for ChatGPT’s responses; the Office of Statewide Prosecution has subpoenaed OpenAI for information and records.
  • OpenAI told U.S. media it identified a ChatGPT account it believes was associated with the suspect and shared that information with law enforcement, and said the chatbot provided factual responses drawn from publicly available sources.

Florida Attorney General James Uthmeier said on Tuesday that the state has launched a criminal investigation into OpenAI and its ChatGPT application in connection with a deadly shooting at Florida State University that occurred last year.

The April incident left two people dead and six others wounded before officers shot the suspect, who was hospitalized. Prosecutors have charged the suspect with multiple counts of murder and attempted murder.

Speaking at a press briefing, Uthmeier described interactions between the suspect and the chatbot. "The chatbot advised the shooter on what type of gun to use, on which ammo went with which gun, on whether or not a gun would be useful at short range," he said. "If it was a person on the other end of that screen, we would be charging them with murder."

Uthmeier’s office stated the investigation will seek to determine whether "OpenAI bears criminal responsibility for ChatGPT’s actions in the shooting." As part of the inquiry, the Office of Statewide Prosecution has issued a subpoena to OpenAI requesting certain information and records.

An OpenAI spokeswoman responded to media inquiries by calling the shooting a tragedy and denying company responsibility. She said that after becoming aware of the incident, OpenAI identified a ChatGPT account it believes was associated with the suspect and "proactively shared this information with law enforcement."

"In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity," the spokeswoman said in her statement to U.S. media.

The announcement of a criminal probe adds to an expanding public debate about the risks and societal implications of artificial intelligence. The rise of AI, the attorney general's office noted, has heightened a range of concerns: that electricity demand from data centers could push up power prices for consumers, that automation could displace workers, and that AI could be used to disrupt democratic processes, accelerate fraud, or assist in planning criminal activity.

Officials did not provide additional details about the scope or timeline of the probe in their public remarks. The subpoena issued by the state prosecution office seeks records that may inform whether any legal responsibility rests with OpenAI for the use of ChatGPT in interactions tied to the shooting.

The inquiry is a criminal review centered on whether responsibility for the chatbot's role, as described by investigators, can be attributed to OpenAI, and it comes as public and governmental scrutiny of AI systems continues to grow.

Risks

  • Potential legal liability for AI companies if authorities determine a chatbot’s outputs contributed to criminal acts, which could affect the technology and legal sectors.
  • Broader public and regulatory scrutiny of AI raises uncertainties for energy, labor, and security sectors given concerns about data center power demand, job displacement, and misuse of AI to facilitate fraud or criminal planning.
