Governments in the United States and Europe are advancing governance frameworks for AI embedded in operational technology (OT) environments after evaluations of Anthropic's Claude Mythos frontier model demonstrated that advanced AI can execute autonomous, multi-stage cyberattacks on infrastructure networks. The finding has direct implications for smart building operators managing AI-driven control systems.
Background
Anthropic released a preview of its new frontier model, Mythos, deploying it exclusively through Project Glasswing - a security initiative in which 12 partner organizations use the model for defensive cybersecurity work and to secure critical software. The release followed internal concern about the model's offensive potential. Anthropic claims that over the preceding weeks, Mythos identified "thousands of zero-day vulnerabilities, many of them critical," some dating back one to two decades.
Independent evaluation confirmed the model's threat profile. The UK AI Security Institute (AISI) found that Claude Mythos Preview "represents a step up over previous frontier models in a landscape where cyber performance was already rapidly improving." In controlled evaluations where Mythos Preview was explicitly directed and given network access, it executed multi-stage attacks on vulnerable networks and discovered and exploited vulnerabilities autonomously - tasks that would take human professionals days. On expert-level capture-the-flag tasks - which no model could complete before April 2025 - Mythos Preview succeeded 73% of the time.
Shane Fry, CTO of RunSafe Security, noted that "vulnerability discovery is outpacing patching," warning that AI accelerates exploit discovery beyond realistic remediation rates, especially in complex OT environments.
Modern commercial properties rely on software platforms to manage everything from HVAC and lighting to elevators and access control. Many of these systems were designed in an era when connectivity was a feature rather than a liability. As a result, they often lack the security architecture standard in other industries, with some running on legacy operating systems and others connected to broader networks in ways that were never fully mapped or secured.
Details
Regulators moved quickly to address AI risk in OT environments. On December 3, 2025, CISA, the NSA, the FBI, and several international cyber authorities released Principles for the Secure Integration of Artificial Intelligence in Operational Technology, a joint framework to help critical infrastructure operators deploy AI safely and responsibly. Developed by CISA, Australia's ACSC, and seven other national cybersecurity agencies, the publication marks a coordinated global step toward a common foundation for securing AI in critical infrastructure. Its core message: AI can enhance reliability and efficiency, but only when governed like any other critical control system.
The framework is structured around four principles covering: educating personnel on AI risks and secure development lifecycles; evaluating business cases and managing OT data security risks; implementing governance frameworks with continuous model testing; and embedding oversight and failsafe mechanisms into AI-enabled OT systems. The guidance advises operators to establish safe operating bounds, monitor models for drift or abnormal behavior, and validate outputs in simulated environments before redeployment. It specifically recommends anomaly detection, logging, and regular AI red-teaming.
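The guidance's "safe operating bounds" and drift-monitoring recommendations can be illustrated with a minimal sketch. This is a hypothetical example, not part of the framework itself: the `SetpointGuard` class, its parameters, and the HVAC scenario are assumptions chosen to show the pattern of clamping AI-proposed control values to a validated envelope, logging interventions, and raising an alarm when model output drifts from its baseline.

```python
from collections import deque
from statistics import mean

class SetpointGuard:
    """Hypothetical envelope check for AI-proposed OT setpoints.

    Clamps each proposal to a configured safe operating band, logs every
    intervention, and flags statistical drift when the recent mean of
    proposals strays too far from a validated baseline.
    """

    def __init__(self, low, high, baseline_mean, baseline_std,
                 window=50, z_limit=3.0):
        self.low, self.high = low, high
        self.baseline_mean, self.baseline_std = baseline_mean, baseline_std
        self.recent = deque(maxlen=window)   # sliding window of proposals
        self.z_limit = z_limit
        self.log = []                        # stand-in for a real audit-log sink

    def check(self, proposed):
        # Safe operating bounds: never pass an out-of-band value downstream.
        clamped = min(max(proposed, self.low), self.high)
        if clamped != proposed:
            self.log.append(f"CLAMPED: {proposed} -> {clamped}")
        self.recent.append(proposed)
        # Drift alarm: mean of recent proposals beyond z_limit standard
        # deviations of the baseline suggests abnormal model behavior.
        if len(self.recent) == self.recent.maxlen:
            z = abs(mean(self.recent) - self.baseline_mean) / self.baseline_std
            if z > self.z_limit:
                self.log.append(f"DRIFT: recent mean z-score {z:.1f}")
        return clamped

# Usage: an HVAC supply-air setpoint guarded to the 16-24 degC band.
guard = SetpointGuard(low=16.0, high=24.0, baseline_mean=20.0, baseline_std=0.5)
applied = guard.check(30.0)  # out-of-band proposal is clamped to 24.0
```

In a real deployment the audit log would feed the anomaly-detection and logging pipeline the guidance calls for, and the baseline statistics would come from the simulated-environment validation step rather than being hard-coded.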
On the supply chain side, the document introduces new expectations for how OT data is used in AI training. Concerns include data assurance and sovereignty, exposure of sensitive process data that can become statistically embedded in models beyond its normal retention period, and data poisoning - all treated as credible threats to model reliability.
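One basic defense against silent tampering with exported OT training data is an integrity fingerprint recorded at export time and re-verified before training. The sketch below is a hypothetical illustration of that idea using a canonical-JSON SHA-256 hash; the record fields and `fingerprint` helper are assumptions, and real data-assurance programs would layer signatures and provenance metadata on top.

```python
import hashlib
import json

def fingerprint(records):
    """SHA-256 over a canonical JSON encoding of a training batch.

    Hypothetical provenance check: the hash is stored when OT data is
    exported, then recomputed before training so that injected or
    altered records (one poisoning vector) are detectable.
    """
    blob = json.dumps(records, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

# Export time: fingerprint the batch and store the hash out-of-band.
batch = [{"sensor": "ahu-1", "temp_c": 20.4},
         {"sensor": "ahu-2", "temp_c": 21.1}]
recorded = fingerprint(batch)

# Training time: any injected record changes the fingerprint.
tampered = batch + [{"sensor": "ahu-2", "temp_c": 95.0}]
assert fingerprint(batch) == recorded
assert fingerprint(tampered) != recorded
```

A hash check only proves the data was not altered after export; it does nothing against poisoning that happens upstream of the export point, which is why the guidance also treats data sovereignty and source assurance as separate concerns.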
In Europe, the EU AI Act entered into force on August 1, 2024, and will be fully applicable from August 2, 2026; obligations for general-purpose AI (GPAI) models took effect on August 2, 2025. The Act sets a risk-based framework for AI governance and imposes requirements on high-risk AI systems, including transparency, bias detection, and human oversight. Under the Act, AI safety components in critical infrastructure - such as transport and building control - are classified as high-risk use cases where failures could endanger the life and health of citizens.
For building operators, the workforce dimension compounds the governance challenge. The SANS 2026 Cybersecurity Workforce Research Report identified a significant governance gap: while 54% of organizations report having AI security policies, only 38% provide comprehensive training - a disconnect that matters acutely in industrial environments, where OT teams already operate with limited visibility and specialized tooling. The report identifies AI governance, risk, and compliance as the top required competencies, signaling a shift from traditional perimeter defense toward governance-heavy security models.
Outlook
AI-related cyberattacks are expected to dominate security operations throughout 2026. Threat actors are leveraging generative AI to orchestrate attacks at previously impossible speeds, while regulators, customers, and auditors increasingly expect provable security controls across the full AI lifecycle - from data ingestion through deployment, monitoring, and incident response. Building operators integrating AI into HVAC, access control, lighting, and fire-safety networks will face mounting pressure to align procurement with emerging AI risk standards, maintain auditable decision logs, and demonstrate cybersecurity maturity as insurance markets tighten coverage terms for AI-enabled OT environments. The convergence of IT and OT and the integration of AI into OT have required a fundamental rethinking of industrial security by vendors and enterprises alike, according to IoT Analytics' OT Cybersecurity Insights Report 2026.
Related reading: Integrated Building Security Standards Gain Momentum | Security-by-Design Surge in Building Automation Amid Rising Cyber-Physical Risks
