Anthropic's announcement of Claude Mythos Preview on April 7, 2026, has put building automation and operational technology (OT) security professionals on alert. The frontier AI model's autonomous vulnerability-discovery capabilities extend well beyond conventional IT infrastructure to the firmware, legacy protocols, and networked control systems that underpin commercial buildings.
Background
On April 7, Anthropic announced its latest general-purpose frontier AI model: Claude Mythos Preview. The release did not follow a standard commercial launch. Instead, Anthropic provided a preview to a small group of partners as part of a new security initiative dubbed Project Glasswing, in which 12 partner organizations deploy the model for "defensive security work" and to secure critical software. Glasswing partners include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks.
The move follows a documented pattern of AI-enabled threat escalation. Anthropic has reported that hacking groups, including those linked to the Chinese government, have attempted to exploit Claude in real-world cyberattacks. In one documented case, the company discovered that a Chinese state-sponsored group had been running a coordinated campaign using Claude Code to infiltrate roughly 30 organizations - including tech companies, financial institutions, and government agencies - before detection.
Readers concerned with building automation security should consult this portal's earlier reporting on secure-by-design standards in building automation, which established baseline context for OT-level cyber-physical risk.
Details
Independent government evaluation has confirmed a step-change in the model's offensive capability. In controlled evaluations where Mythos Preview was explicitly directed and given network access, the UK AI Security Institute (AISI) observed that it could execute multi-stage attacks on vulnerable networks and discover and exploit vulnerabilities autonomously - tasks that would take human professionals days. On expert-level capture-the-flag tasks - which no AI model could complete before April 2025 - Mythos Preview succeeded 73% of the time, according to AISI.
Critically for facility and OT security teams, the model's reach extends beyond patched enterprise software. While the cybersecurity community has spent years focused on application security and "top layer" problems, AI tools have begun exploiting vulnerabilities in forgotten firmware and routers whose manufacturers went out of business long ago. Tools like Mythos can relentlessly weaponize the massive technical debt of large organizations. Building management systems (BMS), HVAC controllers, access control platforms, and energy management gateways frequently run on exactly this class of legacy, unpatched firmware - making them acutely exposed to the autonomous reconnaissance and exploit-chaining Mythos has demonstrated.
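A practical first step against this class of exposure is an inventory check that flags devices running end-of-life firmware. The sketch below is minimal and illustrative: the vendor names, models, and end-of-support dates are hypothetical placeholders, and in practice the catalogue would be populated from vendor advisories.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical end-of-support dates for BMS device lines.
# Real data would come from vendor advisories and CVE feeds.
EOL_CATALOGUE = {
    ("AcmeHVAC", "CTRL-200"): date(2019, 6, 30),
    ("LegacyAccess", "DoorNode"): date(2016, 1, 1),
}

@dataclass
class Device:
    vendor: str
    model: str
    firmware: str
    last_patched: date

def flag_eol_devices(devices: list[Device], today: date) -> list[Device]:
    """Return devices whose vendor support window has already closed."""
    flagged = []
    for d in devices:
        eol = EOL_CATALOGUE.get((d.vendor, d.model))
        if eol is not None and eol < today:
            flagged.append(d)
    return flagged

# Illustrative asset register for a small facility.
inventory = [
    Device("AcmeHVAC", "CTRL-200", "2.4.1", date(2018, 3, 1)),
    Device("ModernBMS", "Edge-X", "5.0.0", date(2025, 11, 2)),
]

for d in flag_eol_devices(inventory, date(2026, 4, 8)):
    print(f"UNSUPPORTED: {d.vendor} {d.model} (firmware {d.firmware})")
```

Devices that fall outside the catalogue are treated as supported here; a stricter posture would flag unknown vendor/model pairs for manual review as well.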
The model identifies a zero-day weakness, then weaponizes and compounds it by chaining it with other vulnerabilities, lingering undetected for as long as needed. Through these "exploit chains," Mythos can execute a full system takeover. According to Anthropic, the prompt used to discover vulnerabilities essentially amounted to "Please find a security vulnerability in this program," and engineers with no formal security training were able to generate complete, working exploits.
A joint report from the Cloud Security Alliance (CSA), the SANS Institute, and OWASP concluded that organizations are "likely to be overwhelmed" in the near term by threat actors using AI to find and exploit vulnerabilities faster than defenders can patch them. CrowdStrike's 2026 Global Threat Report found an 89% year-over-year increase in attacks by adversaries using AI.
The governance gap is equally pressing. Anthropic's Responsible Scaling Policy does not address what happens when a model operates inside an enterprise with access to customer data, financial systems, and thousands of users deploying it without governance. When an AI agent connects to a CRM, queries a database, or triggers a workflow, the concern is not model safety - it is deployment governance. For smart building operators integrating AI-driven automation into HVAC scheduling, lighting orchestration, or occupancy analytics, this distinction defines a critical accountability gap that vendor-level controls alone cannot close.
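The deployment-governance point can be made concrete with a policy gate that checks every agent action against an explicit allow-list before it executes. The sketch below is a simplified illustration, not any vendor's API: the action names and policy shape are invented for the example, and the default-deny rule reflects the accountability gap described above.

```python
class PolicyViolation(Exception):
    """Raised when an AI agent action is blocked by deployment policy."""

# Illustrative governance policy for an AI agent in a smart building:
# each action is allowed outright, requires human approval, or is denied.
POLICY = {
    "read_occupancy_analytics": "allow",
    "adjust_hvac_schedule": "require_approval",
    "query_crm": "allow",
    "modify_access_control": "deny",
}

def gate_action(action: str, approved: bool = False) -> bool:
    """Return True if the action may run; raise PolicyViolation otherwise."""
    verdict = POLICY.get(action, "deny")  # default-deny for unknown actions
    if verdict == "allow":
        return True
    if verdict == "require_approval":
        if approved:
            return True
        raise PolicyViolation(f"{action} needs human approval")
    raise PolicyViolation(f"{action} is denied by policy")
```

The design choice that matters is the default: an unknown action is denied rather than allowed, so new agent capabilities cannot silently bypass governance when a vendor ships an update.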
Mechanisms are needed to ensure that responsibility does not rest solely on individual companies voluntarily following a responsible path. Options include scaling up responsible vulnerability disclosure and systematic testing of critical systems using the most advanced models. Less-resourced organizations need sufficient support for testing and patching their systems. Building sector operators, who typically lack the in-house security resources of technology multinationals, fall squarely within that under-resourced category.
Outlook
Future frontier models will be more capable still, making investment in cyber defense now vital. The AISI has indicated its next evaluations will use ranges simulating hardened environments, including active monitoring, endpoint detection, and real-time incident response. Mythos Preview's ability to write exploits autonomously means patch cycles must shorten. For building automation professionals, immediate priorities align with the fundamentals the AISI identified: regular application of security updates, robust access controls, security configuration, and comprehensive logging. Sector-wide cybersecurity standards - including IEC 62443 for industrial control systems and the forthcoming European Cyber Resilience Act - will need to explicitly address AI-augmented threat models to remain fit for purpose.
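The shortened patch cycles called for above can be tracked with a simple SLA check over an asset register: flag any system whose last security update is older than a target window. A minimal sketch, in which the 14-day window and asset data are illustrative assumptions rather than a recommended standard:

```python
from datetime import date, timedelta

PATCH_SLA = timedelta(days=14)  # illustrative target patch window

# Hypothetical asset register: system name -> date of last security update.
assets = {
    "bms-core": date(2026, 3, 1),
    "hvac-gw-01": date(2026, 4, 1),
    "lighting-ctrl": date(2026, 4, 6),
}

def sla_breaches(last_patched: dict[str, date], today: date) -> list[str]:
    """Names of assets whose last patch predates today minus the SLA window."""
    cutoff = today - PATCH_SLA
    return sorted(name for name, d in last_patched.items() if d < cutoff)

print(sla_breaches(assets, today=date(2026, 4, 8)))
```

Feeding the resulting list into the comprehensive logging and alerting the AISI recommends turns patch-cycle discipline from a policy statement into a measurable daily check.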
