Electronics Insider

Regulators Issue AI Governance Rules for Building Automation OT Systems

Nine international agencies issue AI-OT governance guidance requiring zero-trust controls, human oversight, and auditable AI in smart building systems.

A coalition of nine international cybersecurity agencies has published binding-equivalent guidance requiring critical infrastructure operators - including smart building managers - to embed zero-trust controls, human oversight, and auditable AI decision-making into operational technology (OT) deployments. Released on December 3, 2025, the directive represents the most comprehensive coordinated regulatory action to date targeting AI-enabled OT environments.

Background

The guidance, "Principles for the Secure Integration of Artificial Intelligence in Operational Technology," was issued by CISA together with partner agencies spanning the U.S., U.K., EU, Canada, Australia, and New Zealand. It arrives as AI-driven control logic increasingly replaces or augments traditional deterministic algorithms in building management systems (BMS), HVAC platforms, access control layers, and energy optimization engines.

Unlike IT systems, OT environments are built around determinism, real-time constraints, and strict safety margins - properties that regulators now argue are fundamentally threatened when probabilistic AI models are deployed without structured governance. The joint guidance's risk tables explicitly link AI issues - including model drift, lack of explainability, alarm noise, and interoperability problems - to increased recovery time and reduced system availability, warning that the added complexity and new attack surfaces can hinder troubleshooting and recovery.

The guidance complements a widening international regulatory stack. The EU AI Act entered into force on August 1, 2024, with governance rules and general-purpose AI model obligations applicable from August 2, 2025. Rules for high-risk AI systems embedded in regulated products carry an extended transition period until August 2, 2027. Non-compliance can trigger fines of up to €35 million or 7% of global turnover, depending on the infringement and company size.

Details

The joint document outlines four key principles for owners and operators to realize the benefits of AI integration in OT systems while reducing risk, with a specific focus on machine learning, large language model-based AI, and AI agents due to the complex security considerations they pose.

On governance structure, the framework specifies that governance should involve leadership, OT and IT subject-matter experts, cybersecurity teams, and relevant vendors, with clear roles across the AI lifecycle. It calls for strengthened data governance through access controls, encryption, and behavioral analytics, along with regular audits to ensure models operate as intended.

On testing and interoperability, the guidance directs operators to test AI systems as rigorously as any new OT component, verifying latency, interoperability, and effects on safety boundaries. It recommends limiting active AI control of OT assets without a human in the loop. The framework cross-references NIST's AI Risk Management Framework (AI RMF) and ETSI's Securing Artificial Intelligence (SAI) standards as complementary frameworks and advises adopting a secure AI development lifecycle integrating continuous validation and threat modeling.
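The "human in the loop" recommendation can be made concrete with a small sketch. The guidance itself prescribes no code; the names here (`SafetyEnvelope`, `gate_recommendation`, the 2-degree step limit) are hypothetical, illustrating one way an advisory AI setpoint might be checked against hard safety bounds before it reaches an actuator:

```python
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Hard operating bounds an AI recommendation may never exceed."""
    min_setpoint: float
    max_setpoint: float

def gate_recommendation(ai_setpoint: float, current: float,
                        envelope: SafetyEnvelope,
                        max_step: float = 2.0) -> tuple[float, bool]:
    """Return (applied_setpoint, needs_human_review).

    The AI output is treated as advisory: values outside the safety
    envelope, or large jumps from the current setpoint, are held for
    human approval and the system keeps its last safe value.
    """
    within_bounds = envelope.min_setpoint <= ai_setpoint <= envelope.max_setpoint
    small_step = abs(ai_setpoint - current) <= max_step
    if within_bounds and small_step:
        return ai_setpoint, False   # apply automatically
    return current, True            # hold current value, escalate to operator
```

The design choice mirrors the guidance's intent: the deterministic gate, not the probabilistic model, has final authority over the physical asset.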

On supply chain transparency, the guidance notes that as AI systems become more deeply embedded in OT environments, compliance will increasingly hinge on how operational data is governed and how vendor responsibilities are structured. AI systems expand the volume, sensitivity, and retention of OT data, elevating regulatory exposure and cybersecurity obligations. Cross-border data access by AI vendors may also introduce jurisdictional complications, particularly where foreign laws could mandate disclosure of training or operational datasets.

For building operators specifically, the zero-trust requirement carries direct procurement implications. The NSA, CISA, and partner agencies warned that as AI becomes embedded in industrial control systems, critical infrastructure, and automation pipelines, organizations must establish strong identity verification and continuous trust evaluation - or risk introducing vulnerabilities at unprecedented scale. The guidance also states that operators should account for AI model drift, lack of explainability, operator cognitive load, and interoperability risks, all of which can increase downtime and complicate recovery.

Marcus Fowler, CEO of Darktrace Federal, called it "encouraging to see a strong focus on behavioral analytics, anomaly detection, and the establishment of safe operating bounds that can identify AI drift, model changes, or emerging security risks before they impact operations."

Outlook

Organizations may need to update governance, data practices, and incident-response plans as regulators increase scrutiny of AI in OT environments - a process that building operators, MEP consultants, and system integrators should begin mapping to current BMS procurement and specification cycles. Agentic AI systems - those that act rather than merely advise - are expected to stress-test "human oversight" rules as they gain adoption across automation portfolios in 2026. Executives managing mixed-technology portfolios are advised to prioritize risk-based roadmaps, audit vendor AI disclosures against the new joint framework, and engage with standards bodies developing interoperability schemas for AI-enhanced building controls - steps that parallel but extend beyond existing NIS2 and NIST CSF 2.0 compliance obligations already active across the sector.