OpenAI has begun providing a select group of users with access to a new AI model aimed at locating and remediating software security flaws. The San Francisco-based company initiated the staged deployment of GPT-5.4-Cyber on Tuesday as part of an effort focused on defensive cybersecurity use cases.
In a statement, OpenAI said: "We are fine-tuning our models specifically to enable defensive cybersecurity use cases, starting today with a variant of GPT-5.4 trained to be cyber-permissive: GPT-5.4-Cyber." The company made the model available to participants in its Trusted Access for Cyber program, which it launched in February.
The Trusted Access for Cyber initiative is intended to let cybersecurity professionals probe OpenAI's most advanced systems with fewer constraints, giving vetted testers the ability to evaluate capabilities for finding and fixing vulnerabilities.
The release comes only one week after Anthropic PBC introduced a security-focused tool called Mythos in a limited rollout. That competing launch has generated notable concern among financial institutions and government officials. According to reporting from Bloomberg, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell raised warnings about Mythos with Wall Street leaders during a meeting last week.
OpenAI framed the dual developments in the context of long-standing weaknesses in digital infrastructure, noting that "Digital infrastructure has already been vulnerable for years, before advanced AI even came along," and adding that "threat actors are experimenting with novel AI-driven approaches."
Looking ahead, OpenAI said it plans to expand the program from the initial cohort of several hundred testers to thousands of verified defenders in the coming weeks. The company described its commitment as "democratized access" coupled with rigorous identity verification for users of these advanced, potentially offensive-capable tools.
For cybersecurity teams and market participants, the near-term questions center on how defensive AI tools are tested, verified and governed as they scale. OpenAI's staged approach seeks to balance wider participation with verification processes, but the company did not provide additional detail on how verification will be sustained as the user base grows.
Context and market considerations
The close timing of the two releases has put AI-enabled security tooling at the center of conversations among firms that run critical digital infrastructure and the regulators that oversee financial stability. The limited-access rollouts aim to let defenders assess capabilities in controlled environments before broader distribution.