Regulation · 13 min read · Published 2026-02-04 · Updated 2026-02-04

Navigating the 2026 ASIC RG 265 Framework for Agentic AI: A Regulatory Compliance Checklist for Market Integrity

The structural evolution of the Australian financial ecosystem in 2026 is defined by the transition from automated order processing to the deployment of fully autonomous agentic AI within capital markets.

In response to this shift, the Australian Securities and Investments Commission (ASIC) has identified the "regulatory perimeter" gap as a critical systemic risk: while agentic AI offers efficiency in price discovery, its capacity to plan and act independently compounds the risk of market volatility and "flash crashes" where governance is immature. To address these technological fragilities, ASIC released a comprehensive suite of updates in December 2025 to Regulatory Guide 265 (RG 265), governing market integrity rules for participants of securities markets, and Regulatory Guide 266 (RG 266) for futures markets. These updates form the third stage of ASIC's work to simplify and clarify the Resilience Rules set out in Chapters 8A and 8B of the ASIC Market Integrity Rules (MIRs), and require market participants to prove that their autonomous agents operate within a framework of technological and operational resilience aligned with international best practice from IOSCO.

Central to the 2026 compliance landscape is the modernized, technology-neutral definition of "Trading Algorithms" and "Trading Systems" introduced via Consultation Paper 386 (CP 386). ASIC now defines a Trading Algorithm as any computer algorithm that automatically determines substantive order parameters—such as timing, price, or quantity—with limited or no human intervention. This definition is deliberately broad enough to capture agentic AI, ensuring that autonomous actors are subject to the same oversight as traditional algorithmic strategies. Under the revised MIRs, market participants must maintain a rigorous lifecycle for these agents, beginning with initial certification and testing before deployment, followed by mandatory reviews whenever a material change occurs in the model's weights or policy logic. To meet these standards, firms are increasingly using Trusted Execution Environments (TEEs) to provide a hardware-level "Digital Helmet," which cryptographically proves to auditors that the version of the agent running in production is the exact, untampered version that received compliance certification.
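The certification-then-attestation flow described above can be sketched in a few lines. This is an illustrative simplification, not a real TEE integration: the registry, agent names, and artifact bytes below are all hypothetical, and a production system would rely on hardware-signed attestation reports rather than a plain hash lookup.

```python
import hashlib

# Hypothetical registry mapping certified agent versions to the SHA-256
# "measurement" recorded at compliance-certification time.
CERTIFIED_MEASUREMENTS = {
    "momentum-agent-v3.2": hashlib.sha256(b"model-weights-v3.2").hexdigest(),
}

def attest_agent(version: str, deployed_artifact: bytes) -> bool:
    """Allow deployment only if the deployed artifact hashes to the
    measurement certified for this version -- a software stand-in for
    the TEE-backed "Digital Helmet" attestation described above."""
    expected = CERTIFIED_MEASUREMENTS.get(version)
    if expected is None:
        return False  # never certified: block deployment outright
    return hashlib.sha256(deployed_artifact).hexdigest() == expected

# The certified build passes; any drift in the weights fails attestation.
print(attest_agent("momentum-agent-v3.2", b"model-weights-v3.2"))           # True
print(attest_agent("momentum-agent-v3.2", b"model-weights-v3.2-tampered"))  # False
```

The key property is that a material change to the model's weights changes the measurement, which automatically fails attestation and forces the agent back through the review cycle.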

The "kill switch" requirement has emerged as the most critical technical mandate for autonomous systems in 2026. ASIC rules now explicitly require participants to have controls in place that enable the immediate suspension, limitation, or prohibition of any Trading Algorithm that displays aberrant behavior or threatens market integrity. This mandate is supplemented by a requirement for real-time monitoring of all trading messages, moving beyond the previous "close to real-time" standard. For firms deploying agentic AI, this necessitates a move from post-hoc reporting to a deterministic compliance model. By anchoring risk-based "position collars" and "velocity logic" within silicon-enforced enclaves, institutions can ensure that if an agent attempts a trade that violates a pre-defined policy—such as during a period of extreme "agentic drift"—the enclave simply refuses to sign the transaction, stopping the violation before it ever reaches the matching engine. This technological hardening allows firms to satisfy ASIC's 2026 priorities regarding the "variable maturity" of AI governance across the sector.

From a legal and fiduciary perspective, the 2026 RG 265 checklist reinforces the non-delegable nature of directors' duties under the Corporations Act 2001 (Cth). ASIC Chair Joe Longo has emphasized that while AI can assist in meeting duties, it cannot be used to outsource or abdicate them. Section 180 requires directors to exercise a degree of care and diligence that a reasonable person would exercise, which in the context of autonomous trading translates to an obligation to understand and interrogate the algorithms they deploy. Legal precedents such as ASIC v Healey (the "Centro case") establish that directors cannot claim ignorance of complex systems as a defense; they must satisfy themselves as to the reliability and competence of the AI "delegate" and ensure that meaningful human oversight remains in the loop for all material decisions. To discharge these duties, the 2026 compliance checklist requires firms to maintain clear records of AI systems, conduct regular conformity assessments, and establish robust crisis management frameworks that address the specific vulnerabilities of third-party AI service providers and the risk of "market monoculture" caused by synchronized algorithmic de-risking.

Ultimately, the path to 2026 ASIC compliance requires a holistic integration of hardware-level security, real-time monitoring, and disciplined corporate governance. The "Year of Accountability" demands that financial institutions treat non-human identities (NHIs) and autonomous agents as first-class citizens in their risk frameworks, ensuring that every action taken at machine speed is cryptographically tied to a verifiable identity and an authorized policy. Firms that fail to adopt these enhanced resilience measures risk not only regulatory penalties and "stop orders" on their products but also significant reputational damage as regulators intensify their scrutiny of "AI-washing" and misleading conduct in the digital asset and fintech sectors. By late 2026, the standard for market participation will be defined by "compliance by construction," where the ability to prove controllability through a TEE-backed "Digital Helmet" becomes the prerequisite for maintaining trust in Australia's increasingly autonomous financial ecosystem.
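Tying every machine-speed action to a verifiable non-human identity and an authorized policy can be illustrated with a keyed MAC. This is a deliberately minimal sketch: the agent ID, key store, and policy label are hypothetical, and production systems would typically use asymmetric signatures with TEE-held keys rather than a shared-secret HMAC.

```python
import hashlib
import hmac
import json

# Hypothetical key store: each agent (a "non-human identity") holds a key.
AGENT_KEYS = {"agent-nhi-007": b"per-agent-secret-key"}

def sign_action(agent_id: str, order: dict, policy_version: str) -> str:
    """Bind an order to the agent identity and the policy version it was
    authorised under, so every action is attributable after the fact."""
    payload = json.dumps(
        {"agent": agent_id, "order": order, "policy": policy_version},
        sort_keys=True,
    ).encode()
    return hmac.new(AGENT_KEYS[agent_id], payload, hashlib.sha256).hexdigest()

def verify_action(agent_id: str, order: dict, policy_version: str, tag: str) -> bool:
    """Auditor-side check: recompute the MAC and compare in constant time."""
    return hmac.compare_digest(sign_action(agent_id, order, policy_version), tag)
```

Because the policy version is inside the signed payload, an order authorised under one policy cannot be passed off as compliant with another, which is the attribution property the "compliance by construction" model depends on.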

Francesco Tomatis

CEO & Founder, Kuneo

Read our full guide on AI Governance

This article is for informational purposes only and does not constitute legal or financial advice.