“I Did Not Know” Is Not a Defence: Why ASIC Has Already Put AI Inside Directors’ Duties
Abstract
This paper argues that the "I did not know" defence has effectively closed for Australian directors whose organisations deploy artificial intelligence. In two 2025 speeches, the Chair of the Australian Securities and Investments Commission (ASIC) confirmed that AI sits within existing director duties under the Corporations Act 2001 (Cth) and that regulators will use current powers "more boldly and imaginatively" rather than await AI-specific legislation. Australian Securities and Investments Commission v RI Advice Group Pty Ltd [2022] FCA 496 supplies the doctrinal template: a licensee's failure to implement "reasonably appropriate" controls for foreseeable technology risk breached section 912A, with stepping-stones potential to section 180. The paper maps this reasoning onto AI, proposes a normative governance floor drawing on NIST AI RMF, ISO/IEC 42001, and AICD/HTI guidance, and situates the analysis within Australia's technology-neutral regulatory regime and emerging international standards. Foreseeability and industry baselines have now sufficiently crystallised to expose directors who cannot evidence governance.
Executive Summary
In the space of four months, the Chair of the Australian Securities and Investments Commission (ASIC) has twice told directors, in clear terms, that artificial intelligence (AI) now sits inside their existing duties. At the Australian Institute of Company Directors (AICD) Australian Governance Summit in March 2025, Joe Longo reminded boards that “the times they are a changin’, but directors’ duties aren’t”, emphasising that directors must still act with due care and diligence by understanding their business and its risks. In July 2025, addressing the Australian Banking Association, he went further: “as companies increasingly deploy AI, that use of AI is one of a range of issues that directors must pay attention to, as part of their existing duties.”
At the same time, ASIC has demonstrated its willingness to litigate foreseeable technology risk through existing provisions, most notably in Australian Securities and Investments Commission v RI Advice Group Pty Ltd [2022] FCA 496, where the Federal Court held that a licensee’s failure to implement “reasonably appropriate” cyber controls breached its general obligations under section 912A of the Corporations Act. The logic transposes readily to AI: the risk is foreseeable, zero risk is impossible, and regulators now expect structured, documented and tested controls.
This paper argues that the “we did not know” defence for AI has already closed as a matter of regulatory signalling and legal architecture. First, it sets out the practical framing through a recent Australian cautionary tale: the 2025 Deloitte/Department of Employment and Workplace Relations (DEWR) refund over a generative AI assisted report, which illustrates that AI failures can be prosecuted under existing standards and contracts without any new AI specific statute. Second, it analyses ASIC’s public statements and the RI Advice precedent to show that a template for AI enforcement is already available. Third, it outlines what “reasonably appropriate” governance looks like for AI in light of the AICD/Human Technology Institute guidance, the National Institute of Standards and Technology (NIST) AI Risk Management Framework, ISO/IEC 42001 and emerging international norms.
The broader regulatory context is then mapped: a technology neutral domestic regime (Corporations Act, Privacy Act, Australian Consumer Law, APRA CPS 230, OAIC guidance) layered with extraterritorial pressures from the EU AI Act and US enforcement trends. The conclusion is straightforward. AI adoption is no longer optional; but neither is its governance. The directors who will survive the first Australian AI governance test case will not be those who can say “I did not know”, but those who can show, with evidence, what they knew, what they did about it, and when.
1. The framing: “I did not know” is no longer available¹
A useful starting point is not a hypothetical future catastrophe, but an actual, recent failure in a familiar setting: a major professional services firm delivering a written report to a Commonwealth department.
In October 2025, Deloitte Australia agreed to partially refund a $440,000 contract with the Department of Employment and Workplace Relations after a 237 page report was found to contain fabricated academic references and a fictitious quote attributed to a Federal Court judge in the Robodebt litigation. Subsequent reporting disclosed that generative AI (Microsoft’s Azure OpenAI) had been used in preparing the report, and that the use of AI was only acknowledged in a revised version after external experts identified the fabrications.
Nothing about that failure required a new AI statute to make it actionable. Existing professional standards (including duties of competence and diligence), contractual obligations around deliverable quality, and the background of the Robodebt Royal Commission were sufficient to make the incident reputationally and commercially significant. The episode was particularly striking because the report’s subject matter, automated decision making in welfare, sat squarely within the terrain that the Robodebt Commission had just identified as a systematic governance failure rather than a purely technical one. It illustrated, in one case, four key points:
- AI risk is no longer exotic or speculative; it arises in ordinary professional services work undertaken by mainstream firms for public authorities.
- The failure mode (hallucinated citations and misattributed judicial language) was foreseeable given well publicised limitations of large language models.
- Existing law and contract were already sufficient to allocate responsibility; there was no “AI gap” in liability.
- The core governance questions (who approved the use of AI, what quality assurance processes were in place, who signed off on the final report) are the same questions directors are expected to ask of management in any other high risk context.
This sits against a broader empirical backdrop. Research by UpGuard in 2025 found that more than 80 percent of workers, and around 68 percent of security leaders, report using unauthorised AI tools at work, with fewer than one in five respondents saying they rely exclusively on company approved tools. Lenovo’s Work Reborn research series reported that over 70 percent of employees use AI weekly and that a significant fraction do so outside formal IT governance. A KPMG/University of Melbourne global study found that 48 percent of Australian employees admit to using AI in ways that contravene company policies, and that only around 30 percent say their organisation has a generative AI policy at all.
In that environment, where the weight of evidence suggests a substantial proportion of enterprise AI use occurs outside formal governance channels, ignorance of AI deployments is increasingly likely to be treated as a governance failure rather than a defence. The Robodebt Royal Commission reinforces this point in the public sector context, framing the unlawful debt recovery scheme as a failure of oversight, legal analysis and ethical scrutiny, not as an unforeseeable technological misstep.
The central claim of this paper is that, given the state of public guidance and legal development by mid 2026, directors cannot credibly say that AI risk took them by surprise. The relevant question is no longer “did you know AI was a risk?” but “what reasonably appropriate steps did you take to identify, govern and monitor that risk once it was clear?”
What this paper argues
- ASIC has now twice stated, in public speeches, that AI sits within existing director duties and will be supervised using existing powers.
- ASIC v RI Advice provides the legal architecture for AI-related director liability under the Corporations Act, without any need for an AI-specific statute.
- Directors who cannot evidence “reasonably appropriate” AI governance (inventories, clear accountability, assurance, and incident response testing) are exposed under multiple Australian regimes at once, regardless of whether an AI Act is enacted.
¹ Methodology. This paper is based on doctrinal analysis of publicly available sources current to May 2026, including ASIC speeches and enforcement actions, Australian and international judgments, prudential standards, and recognised AI governance frameworks (NIST AI RMF, NIST AI 600 1, ISO/IEC 42001, AICD/HTI guidance). It does not present original empirical research. All interpretations of regulatory position and legal risk are the author’s unless otherwise indicated.
2. What ASIC has actually said
The doctrinal weight behind the paper’s title comes from ASIC’s own words. In two set piece speeches in 2025, Chair Joe Longo effectively placed AI within the perimeter of existing director duties, not as a hypothetical but as a live regulatory expectation.
2.1 AICD Australian Governance Summit (March 2025)
Addressing the AICD Australian Governance Summit in March 2025, Longo reminded boards that “the times they are a changin’, but directors’ duties aren’t”. Technological change, on this view, does not dilute the duty of care and diligence: directors must still understand their business and its risks, and that now includes understanding how and where the organisation uses AI.
2.2 Australian Banking Association (July 2025)
Four months later, addressing the Australian Banking Association, Longo went further: “as companies increasingly deploy AI, that use of AI is one of a range of issues that directors must pay attention to, as part of their existing duties.” The statement left little room for doubt that ASIC regards AI governance as squarely within the existing perimeter, to be supervised using existing powers.
2.3 Enforcement posture and recent cases
ASIC’s willingness to litigate technology related risk through existing provisions is already visible in its enforcement docket. ASIC v RI Advice has become the canonical example, with the Federal Court holding that the licensee’s failure to implement “adequate cyber security risk management systems” breached section 912A(1)(a) and (h), even though zero cyber risk was impossible. ASIC has characterised the case as a warning to directors that they must ensure their organisation’s risk management framework adequately addresses cyber security risk, or risk regulatory sanction.
Subsequent actions, such as proceedings relating to FIIG Securities, reinforce the idea that ASIC will pursue firms that fail to manage foreseeable operational and conduct risks arising from technology and complex products. When read alongside Longo’s 2025 speeches, these cases indicate a coherent enforcement narrative: sophisticated entities are expected to identify and manage technology enabled risks (first cyber, now AI) using reasonably appropriate systems; failure to do so exposes both the firm and, potentially, its directors.
Against that backdrop, the question in any future AI related case will not be whether ASIC had authority to act, but whether the risk was foreseeable and whether the board can demonstrate that it insisted on “reasonably appropriate” AI governance once those risks were known.
3. The legal precedent: RI Advice as the template
The doctrinal core for transposing these expectations to AI lies in Australian Securities and Investments Commission v RI Advice Group Pty Ltd [2022] FCA 496. Although the case concerned cyber security, its reasoning is technology agnostic and readily applicable to AI.
3.1 Facts and findings
RI Advice was an Australian Financial Services Licensee whose authorised representatives had experienced a series of cyber incidents, including ransomware attacks and unauthorised access to client information. ASIC alleged that RI Advice had failed to:
- Have adequate risk management systems in place, contrary to its obligations under section 912A(1)(a) and (h) of the Corporations Act.
- Ensure those systems were effectively implemented and maintained over time.
Justice Rofe accepted that it was “not possible to reduce cybersecurity risk to zero” but held that RI Advice was required to have in place “adequate documentation and controls” capable of materially reducing cyber risk to an acceptable level, and that its failure to do so breached section 912A(1)(a) and (h).
The case settled by consent orders, but the reasons have been widely treated as authoritative guidance on regulators’ and courts’ expectations for technology risk management, and as a signal that ASIC will pursue entities whose cyber risk frameworks are materially deficient.
3.2 Stepping stones to director liability
Although RI Advice did not itself involve director defendants, it has been extensively analysed by law firms and academic commentators as a stepping stones case in waiting: where a company is found to have breached its licence obligations by failing to manage foreseeable cyber risk, ASIC can subsequently allege that directors breached section 180 by failing to ensure that reasonably appropriate systems were in place.
The stepping stones strategy more generally has been described by Langford, Ramsay and others as an enforcement approach in which a proven or admitted contravention by the company is treated as the “first stone”, from which ASIC then seeks to establish that directors contravened their duty of care by causing or failing to prevent that contravention. On this view, RI Advice is best understood as a cyber risk stepping stones case that demonstrates how operational risk failures can ground later personal claims against directors, a logic that can extend, in principle, to other forms of operational risk, including AI.
Key elements of that logic include:
- Foreseeability: cyber risk was well understood, widely publicised and directly relevant to the licensee’s operations.
- Standards: industry guidance, regulator expectations and basic risk management principles had collectively established a baseline for what “adequate” controls look like.
- Governance expectations: directors were expected to ensure that frameworks existed, were resourced, were periodically reviewed, and were adjusted in response to incidents and changing risk profiles.
Once those elements are accepted, the move to director liability becomes a question of whether a reasonable director in the circumstances would have known about the risk and insisted on appropriate systems.
3.3 Applying RI Advice to AI
Every component of the RI Advice reasoning can now be mapped onto AI:
- Foreseeability: AI related risks are widely discussed in regulator speeches, director guidance and major surveys. ASIC’s own statements, the AICD/HTI Director’s Guide to AI Governance and the National AI Plan have all highlighted AI as a key governance issue. Internationally, the EU AI Act, US enforcement actions and high profile cases such as Mobley v Workday (AI hiring bias) and Moffatt v Air Canada (chatbot misrepresentation) make it increasingly implausible to argue that AI risks were unknown.
- Standards: We now have detailed, publicly available frameworks that specify what good AI governance looks like. NIST’s AI Risk Management Framework (AI RMF 1.0) and its Generative AI Profile (AI 600 1) provide structured guidance on governing, mapping, measuring and managing AI risks. ISO/IEC 42001 offers an auditable management systems standard for AI analogous to ISO 27001 for information security. AICD/HTI’s eight elements of safe and responsible AI governance translate these concepts into an Australian board context.
- Governance expectations: ASIC has, in effect, already told directors that AI sits within their existing duties and that it will use its current powers “more boldly and imaginatively” in response. APRA’s CPS 230 Prudential Standard on Operational Risk Management, effective from July 2025, requires regulated entities to identify and manage operational risks (including technology and data risks), maintain critical operations within tolerance levels and manage service provider risks, all of which plainly encompass AI enabled processes and outsourced AI providers.
On this foundation, it is easy to imagine an AI analogue to RI Advice. A firm deploys AI in credit decisioning, claims management, pricing or advice; a series of incidents reveals biased outcomes, hallucinated outputs or operational failures; ASIC alleges that the firm failed to implement reasonably appropriate AI governance controls despite clear public guidance; and, in due course, directors face section 180 claims for failing to ensure those systems were in place.
To be clear, there are limits to this analogy. Technology neutral provisions such as sections 912A and 180 are under specified, and AI systems’ opacity and complexity make it genuinely difficult, in some settings, to say what a reasonable director “ought to have known” about a particular model’s behaviour or failure mode. Reasonable minds can differ on how far RI Advice should be stretched beyond cyber, and whether a court would be prepared to characterise particular AI harms as arising from a failure to implement “reasonably appropriate” controls rather than from irreducible uncertainty. The case is best seen as establishing a trajectory for operational risk enforcement, not as a complete code for AI.
Importantly, however, the enforcement challenge in a future AI case would not be the absence of an AI specific statute, but the court’s assessment of whether AI risks were foreseeable and whether industry standards had crystallised sufficiently to define a baseline. By mid 2026, both conditions are, at least for larger and more sophisticated entities, increasingly likely to be satisfied.
4. What “reasonably appropriate” looks like for AI
This section proposes, rather than describes, a minimum governance standard that courts may in future regard as “reasonably appropriate” for AI in firms with material exposure, drawing on existing guidance from AICD/HTI, NIST AI RMF, NIST AI 600 1, ISO/IEC 42001 and leading law firm toolkits. It is a normative framework: an attempt to articulate a defensible floor below which a director would struggle to argue that they had discharged their duties, particularly in a regulated or high impact environment.
A starting point is simple visibility. As the survey evidence in section 1 indicates, “shadow AI”, unsanctioned or uncontrolled use of AI tools by employees, is pervasive: most workers report using unapproved AI tools, fewer than one in five rely exclusively on company approved ones, and many organisations lack even a generative AI policy.
4.1 An AI register and risk classification
Against that backdrop, a basic AI inventory or register, capturing all AI systems in use, including embedded AI in third party products, is the minimum precondition for any meaningful governance. A credible register would include, at a minimum:
- System name and owner.
- Purpose and high level description.
- Risk classification (for example, high/medium/low based on impact and regulatory exposure).
- Key data sources, model type and version (where known).
- Last assurance or testing date.
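To make the expectation concrete, the sketch below models a register entry as a typed record. It is a minimal illustration only: the field names, risk tiers and the one year assurance cap are assumptions for the example, not drawn from any standard or regulatory schema.

```python
# A minimal sketch of an AI register entry, assuming the fields listed above.
# Field names, risk tiers and the one-year assurance cap are illustrative
# assumptions, not taken from any standard or regulatory schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"      # e.g. credit decisioning, hiring, safety critical uses
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class AIRegisterEntry:
    system_name: str
    owner: str                                # named accountable executive
    purpose: str                              # high level description
    risk_tier: RiskTier
    data_sources: list[str] = field(default_factory=list)
    model_type: str = "unknown"               # e.g. "LLM (vendor hosted)", where known
    model_version: str = "unknown"
    last_assurance_date: date | None = None   # None flags a never-tested system

    def overdue_for_assurance(self, as_of: date, max_age_days: int = 365) -> bool:
        """True if the system has never been assured, or not recently enough."""
        if self.last_assurance_date is None:
            return True
        return (as_of - self.last_assurance_date).days > max_age_days
```

Even a structure this simple makes the governance gap queryable: a board can ask how many entries exist, how many are high risk, and how many are overdue for assurance.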
The now superseded Voluntary AI Safety Standard and associated “guardrails” had already pointed in this direction (for example, Guardrail 7 on AI inventories); ISO/IEC 42001 generalises the expectation into a continuous management systems requirement.
4.2 Named executive and board level ownership
Clarity of accountability is central to both ASIC’s messaging and governance best practice. AICD/HTI’s guide recommends explicit allocation of AI responsibilities across management and the board, warning that many organisations still treat AI as a diffuse IT issue rather than a strategic risk. In practice, that implies:
- A named executive owner for AI risk and governance (for example, CRO, CAIO or a member of the executive committee).
- A designated board committee (often risk, audit or technology) with AI explicitly within its charter.
- Clear reporting lines from operational AI teams to the board, including escalation triggers.
In light of Longo’s statements, it is difficult to see how a board approving significant AI investment could discharge its duties without having, at minimum, asked: “Who is accountable for this portfolio of AI risks, and which committee is responsible for oversight?”
4.3 Independent assurance over high risk AI
NIST AI RMF and AI 600 1 both emphasise the need for continuous testing, monitoring and independent evaluation of AI systems, particularly where they affect safety, fundamental rights or financial outcomes. ISO/IEC 42001 embeds this into an assurance cycle, requiring internal audits and management reviews of AI management systems.
From a governance perspective, it is no longer tenable for the same team that builds an AI system to be solely responsible for its validation and monitoring. “Reasonably appropriate” controls for high risk AI (for example, in credit, claims, pricing, discrimination sensitive decisions or safety critical operations) would likely include:
- Periodic independent reviews by internal audit or qualified external assessors.
- Documented test plans and results, including fairness, robustness and drift tests.
- Clear remediation plans and follow up reporting to the board.
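As a concrete instance of the documented testing contemplated above, the sketch below computes a population stability index (PSI), one common measure of model input or score drift between a baseline sample and recent production data. The binning approach and the 0.1/0.25 alert thresholds are conventional rules of thumb assumed for illustration, not figures taken from any regulator’s guidance.

```python
# A minimal sketch of one drift test from the list above: the population
# stability index (PSI), a common measure of shift between a baseline sample
# (e.g. validation data) and recent production data. The 0.1 / 0.25 alert
# thresholds are conventional rules of thumb, not regulatory figures.
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI = sum((a% - e%) * ln(a% / e%)) over shared bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    # Clip recent data into the baseline range so nothing falls outside the bins.
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    e_pct = np.maximum(e_counts / e_counts.sum(), 1e-6)  # floor avoids log(0)
    a_pct = np.maximum(a_counts / a_counts.sum(), 1e-6)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


baseline = np.random.default_rng(0).normal(0.50, 0.10, 10_000)  # scores at validation
recent = np.random.default_rng(1).normal(0.55, 0.12, 10_000)    # scores in production
psi = population_stability_index(baseline, recent)
if psi > 0.25:
    print(f"PSI {psi:.3f}: material drift, escalate per remediation plan")
elif psi > 0.10:
    print(f"PSI {psi:.3f}: moderate drift, schedule model review")
else:
    print(f"PSI {psi:.3f}: stable")
```

The substance matters less than the discipline: a documented test, a defined threshold, and a predefined escalation path are what distinguish assurance from assertion.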
Grant Thornton’s 2026 AI Impact Survey found that while three in four boards have approved major AI investments, only 20 percent of organisations have a tested AI incident response plan, illustrating a wide gap between deployment and assurance.
4.4 Documented objective functions and constraints
Many of the most consequential AI failures have their roots not in coding errors, but in what the system was designed to optimise. Obermeyer et al. famously showed that a widely used health management algorithm in the United States optimised for healthcare cost as a proxy for need, resulting in significantly fewer Black patients being flagged for extra care despite similar levels of illness. In Australia, the Robodebt scheme optimised for throughput and debt recovery using income averaging proxies, without lawful authority or a robust human override.
For directors, the key question is: “What is this AI system optimising for, and what is it explicitly not allowed to trade off?” A reasonably appropriate standard for high risk systems would include:
- Written articulation of the objective function (for example, reduce loss, increase retention, triage cases) and the constraints (for example, no unlawful discrimination, no debts raised without lawful basis).
- Alignment of those objectives and constraints with risk appetite statements, conduct expectations and regulatory obligations.
- Governance mechanisms for changing objectives and constraints over time, with appropriate approvals.
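One way to operationalise this, sketched below, is to hold the objective function and its hard constraints in version controlled configuration that a board committee approves and that any change must re-trigger. The system name, objective, constraints and approval reference are hypothetical examples, not drawn from any actual deployment.

```python
# A sketch of a documented objective function with explicit hard constraints,
# held as version controlled configuration that a board committee approves.
# The system, objective, constraints and approval reference are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: changing the spec means issuing a new version
class ObjectiveSpec:
    system: str
    optimises_for: str                 # the written objective function
    hard_constraints: tuple[str, ...]  # what the system may never trade off
    approved_by: str                   # approval record for this version
    version: str


CLAIMS_TRIAGE_V1_2 = ObjectiveSpec(
    system="claims-triage-model",
    optimises_for="reduce average claims handling time",
    hard_constraints=(
        "no unlawful discrimination on protected attributes",
        "no claim denied without documented human review",
        "no debt raised without a verified lawful basis",
    ),
    approved_by="Board Risk Committee minute, March 2026",
    version="1.2",
)
```

Freezing the record is the point: the objective cannot quietly drift, because altering what the system optimises for requires a new, approved version.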
Without this, boards risk presiding over optimisation processes that deliver short term performance gains at the expense of legality, fairness or trust.
4.5 AI incident response and board level metrics
Finally, AI incidents should be treated as a specific class of operational and conduct incident within existing frameworks. APRA’s CPS 230 requires regulated entities to maintain business continuity and manage operational risks, including technology and service provider risks, through scenario analysis, incident reporting and remediation. OAIC’s 2024 guidance on AI and privacy similarly expects organisations to identify high risk AI activities, conduct privacy impact assessments and embed “privacy by design” into AI development and deployment.
A baseline for “reasonably appropriate” AI incident management would therefore include:
- A documented AI incident response plan, integrated into broader incident and crisis management frameworks.
- Scenario exercises simulating AI failures (for example, biased hiring, erroneous pricing, hallucinated content) with board participation.
- A small dashboard of AI control metrics in regular board reports: model drift alerts, override rates, bias test results, incident counts, and status of incident response testing.
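The sketch below shows how such a dashboard might be reduced to a small, exception oriented report structure. The metric names and escalation rules are illustrative assumptions, not a prescribed reporting standard.

```python
# A minimal sketch of the board dashboard suggested above, reduced to an
# exception oriented report. Metric names and escalation rules are
# illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class AIBoardDashboard:
    period: str
    drift_alerts: int        # models breaching drift thresholds this period
    override_rate: float     # share of AI assisted decisions overridden by humans
    failed_bias_tests: int   # fairness tests outside agreed tolerance
    open_incidents: int      # AI incidents open at period end
    ir_last_tested: str      # date the AI incident response plan was last exercised

    def exceptions(self) -> list[str]:
        """Return the items a board should see as exceptions, not noise."""
        flags = []
        if self.drift_alerts:
            flags.append(f"{self.drift_alerts} model(s) past drift threshold")
        if self.failed_bias_tests:
            flags.append(f"{self.failed_bias_tests} failed bias test(s)")
        if self.open_incidents:
            flags.append(f"{self.open_incidents} open AI incident(s)")
        return flags


report = AIBoardDashboard(period="2026-Q2", drift_alerts=1, override_rate=0.04,
                          failed_bias_tests=0, open_incidents=2,
                          ir_last_tested="2026-03-14")
print(report.exceptions())
```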
Collectively, these elements constitute a defensible floor. A board that cannot point to any of them in a high AI exposure business would struggle to argue that its governance of foreseeable AI risk was “reasonably appropriate”.
5. The broader regulatory context: AI in many laws at once
One of the more dangerous misconceptions in Australian boardrooms is that, because the Commonwealth has not enacted a horizontal AI Act, exposure is somehow reduced. In reality, the December 2025 National AI Plan explicitly abandoned earlier proposals for mandatory AI guardrails in favour of a “technology neutral” approach that relies on existing laws.
From a director’s perspective, this increases rather than decreases complexity. AI now sits at the intersection of multiple regimes where duties already exist:
- Corporations Act: sections 180–184 (directors’ duties) and section 912A (AFS licensee obligations) provide the backbone for governance related enforcement. RI Advice is already the canonical example of technology risk enforcement through section 912A, with clear stepping stones potential to section 180.
- ASIC Act and Australian Consumer Law: misleading or deceptive conduct provisions apply to AI related representations, including “AI washing” (exaggerated or false claims about AI capabilities). US SEC enforcement actions against Delphia and Global Predictions for AI related misrepresentations illustrate how existing marketing and compliance rules can be applied without AI specific law.
- Privacy Act: 2024 reforms and OAIC guidance on AI emphasise transparency, fairness and accountability in automated decision making, particularly where decisions have legal or similarly significant effects. Entities using AI in high impact contexts are expected to conduct privacy impact assessments and explain how personal information is used and protected.
- APRA CPS 230: as noted, APRA’s new operational risk standard, effective from July 2025, requires entities to manage operational risks (including technology and data risks), ensure continuity of critical operations, and manage service provider risks through formal agreements, monitoring and severe but plausible scenario testing. Any AI system supporting payments, claims, trading or customer critical processes is, by definition, in scope.
- Anti discrimination and employment law: while no Australian AI hiring case has yet reached the courts, US developments such as Mobley v Workday, in which an AI hiring class action has survived key procedural challenges, and the EEOC’s 2023 AI hiring settlement in iTutorGroup demonstrate that regulators treat AI mediated discrimination as within existing statutes, not outside them.
The net effect is that AI risk is diffused across multiple, already active enforcement pathways. The decision not to legislate AI specifically does not reduce directors’ exposure; it simply means that compliance failures can trigger several regimes at once, with multiple regulators (ASIC, APRA, OAIC, ACCC, AHRC) involved.
6. International context as accelerant
Even where Australian entities have limited direct international exposure, foreign developments are shaping expectations in at least three ways: through counterparties and supply chains, through insurers and auditors, and through evolving evidentiary standards for what counts as “reasonably appropriate” AI governance.
6.1 EU AI Act
The EU AI Act introduces a risk tiered regime with strict obligations for “high risk” AI systems, backed by penalties of up to 35 million euros or 7 percent of global annual turnover for certain breaches. Obligations include comprehensive risk management systems, data governance and documentation requirements, transparency measures, human oversight provisions, robustness and accuracy obligations, and post market monitoring, with detailed technical documentation specifications in Annex IV.
For Australian boards, the relevance is twofold. First, multinational groups operating in or supplying to the EU will insist that Australian subsidiaries and counterparties align to EU equivalent standards, effectively importing EU AI Act expectations through contracts, procurement questionnaires and vendor risk assessments. Second, as the Act beds down, courts and regulators elsewhere may treat its requirements, alongside NIST and ISO frameworks, as evidence of what is reasonably practicable for sizeable organisations deploying AI in high risk contexts.
In practice, this means that even domestically focused entities may find themselves compared, implicitly or explicitly, with EU AI Act benchmarks when explaining their AI governance approach to investors, auditors and regulators.
6.2 US enforcement
In the United States, enforcement has begun along two axes that are directly relevant to directors: AI washing and AI mediated discrimination.
On the marketing and disclosure front, the US Securities and Exchange Commission’s 2024 enforcement actions against Delphia (USA) Inc. and Global Predictions Inc. for misleading AI related statements demonstrate how existing securities and marketing rules can be applied without any AI specific statute. The SEC alleged that the firms overstated the role and sophistication of AI in their investment processes, treating those claims as potentially deceptive under the Investment Advisers Act and related rules.
On the employment and consumer protection front, the Equal Employment Opportunity Commission’s 2023 settlement in EEOC v iTutorGroup, widely reported as the agency’s first AI related hiring case, and the ongoing Mobley v Workday litigation illustrate how AI mediated decisions can trigger anti discrimination law. In iTutorGroup, the EEOC alleged that recruiting software was programmed to automatically reject older applicants, leading to a consent decree and financial settlement. In Mobley, a US federal court has allowed a class action against Workday, alleging algorithmic hiring bias, to proceed past key procedural hurdles, with the plaintiffs arguing that Workday acts as an agent for employer customers who remain liable for discriminatory outcomes.²
These developments provide ASIC and other Australian regulators with ready made analogies and enforcement narratives, further eroding any claim that AI related harms are too novel to be addressed through existing law.
² Mobley v Workday, Inc (N.D. Cal., No. 3:23 cv 00770). On 16 May 2025, the court granted conditional collective certification of the plaintiff’s Age Discrimination in Employment Act claims; on 6 March 2026, it rejected Workday’s argument that ADEA disparate impact protections do not extend to job applicants. See, e.g., HR Dive and academic commentary summarised in Miami Law Review, “Mobley v Workday and the Legal Limits of AI Hiring” (2026).
6.3 Cross border expectations and de facto standards
International developments also influence what insurers, auditors and large counterparties expect to see when they review AI governance. Industry commentary suggests that NIST’s AI Risk Management Framework, its Generative AI Profile (AI 600 1), and ISO/IEC 42001 are increasingly being used as reference points in AI risk assessments, even where regulators have not mandated them. In practice, this translates into due diligence questions such as:
- Which AI risk management framework does your organisation align to?
- How do you evidence compliance against NIST AI RMF or ISO/IEC 42001 control points?
- What independent assurance have you obtained over high risk AI systems?
For Australian boards, the effect is that these frameworks begin to function as de facto standards: if a reasonably comparable firm can show alignment to NIST AI RMF or hold ISO/IEC 42001 certification, it becomes harder to argue that such measures are impracticable. Even absent a domestic AI Act, international practice therefore raises the evidentiary bar for what counts as “reasonably appropriate” AI governance in sophisticated institutions.
7. Conclusion: safe, secure and resilient AI as a fiduciary obligation
It is mid 2026 and three propositions are difficult to dispute.
First, AI is no longer peripheral. It is embedded in credit models, claims handling, pricing, trading, customer engagement, hiring, legal and professional services, and public administration. Studies such as KPMG and the University of Melbourne’s Trust, Attitudes and Use of AI confirm that a majority of employees are already using AI at work, often outside formal governance, and that public expectations for regulation are high.
Second, regulators have been explicit that AI sits within existing duties, not outside them. ASIC’s Chair has twice told directors, on the record, that AI is “one of a range of issues that directors must pay attention to, as part of their existing duties”, and that regulators will use current powers “more boldly and imaginatively” rather than waiting for new AI law. APRA’s CPS 230, OAIC’s AI privacy guidance and the National AI Plan’s pivot away from mandatory guardrails all reinforce the message: technology neutral enforcement, not AI exceptionalism.
Third, the enforcement architecture is in place. RI Advice has demonstrated that failure to implement reasonably appropriate controls for foreseeable technology risk can breach licence obligations and, by extension, expose directors via stepping stones reasoning. Internationally, the EU AI Act, US SEC AI washing cases, and AI related litigation such as Mobley v Workday and Moffatt v Air Canada provide concrete templates for how courts and regulators treat AI mediated harms under existing law.
Against this backdrop, “I did not know” is not a defence; it is an indictment of the governance system. The defensible questions for directors are now different: What did you know about AI use in your organisation? What reasonably appropriate systems did you insist on? How did you test and assure them? When did you respond as the risk profile changed?
Safe, secure and resilient AI adoption is no longer merely an innovation or competitiveness issue; it has become a fiduciary obligation. Boards that treat AI governance as integral to risk management, culture and strategy, by building inventories, assigning clear ownership, aligning to frameworks such as NIST AI RMF and ISO/IEC 42001, and demanding hard evidence rather than glossy demos, will be well placed to demonstrate that they exercised due care and diligence. Boards that do not are gambling that their AI exposure will remain invisible long enough to fix later.
Regulators have told directors what is expected. The litigation pipeline is forming. The frameworks are public. The question is no longer whether AI sits inside directors’ duties (ASIC has answered that), but whether boards can show, with evidence, that they took it seriously while there was still time to choose.
References
Australian Institute of Company Directors, & Human Technology Institute. (2024). *A director’s guide to AI governance*. [AICD](https://www.aicd.com.au/content/dam/aicd/pdf/tools-resources/director-resources/a-directors-guide-to-ai-governance-web.pdf)
Australian Institute of Company Directors. (2024, June 11). *A director’s introduction to AI and guide to AI governance*. [AICD](https://www.uts.edu.au/news/2024/06/directors-introduction-ai-and-guide-ai-governance)
Australian Institute of Company Directors. (2026, April 15). *Director Sentiment Index* (1H 2026). [AICD](https://www.aicd.com.au/news-media/research-and-reports/director-sentiment-index.html)
Australian Institute of Company Directors. (2026, April 16). *Directors brace for rising costs, AI risk and global volatility* (Director Sentiment Index 1H 2026). [AICD](https://www.aicd.com.au/news-media/media-releases/2026/director-sentiment-index-1h26.html)
Australian Securities and Investments Commission. (2022, May 4). *Court finds RI Advice failed to adequately manage cyber security risks* (Media Release 22‑104MR). [ASIC](https://asic.gov.au/about-asic/news-centre/find-a-media-release/2022-releases/22-104mr-court-finds-ri-advice-failed-to-adequately-manage-cyber-security-risks/)
Australian Securities and Investments Commission. (2025, July 22). *AI: A blueprint for better banking?* (Speech by Chair Joe Longo to the Australian Banking Association). [ASIC](https://www.asic.gov.au/about-asic/news-centre/speeches/ai-a-blueprint-for-better-banking/)
Australian Securities and Investments Commission v RI Advice Group Pty Ltd [2022] FCA 496. Judgment PDF via [ASIC](https://download.asic.gov.au/media/zhodijpp/22-104mr-2022-fca-496.pdf)
Board Intelligence. (2026, March 17). *The AI readiness gap: What the data reveals about boards and AI*. [Board Intelligence](https://www.boardintelligence.com/blog/the-ai-readiness-gap-what-the-data-reveals-about-boards-and-ai)
Community Directors. (2023, October 1). *How an AI governance framework can help*. [Institute of Community Directors Australia](https://www.communitydirectors.com.au/help-sheets/artificial-intelligence-and-governance)
Deloitte to refund government after using AI in $440k report. (2025, October 7). *Accounting Times*. [Accounting Times](https://www.accountingtimes.com.au/technology/deloitte-to-refund-government-after-using-ai-in-440k-report)
Dentons. (2024, February 14). *Airline ordered to compensate a B.C. man because its chatbot provided inaccurate information* (Moffatt v. Air Canada 2024 BCCRT 149). [Dentons Data](https://www.dentonsdata.com/airline-ordered-to-compensate-a-b-c-man-because-its-chatbot-provided-inaccurate-information/)
European Parliament, & Council of the European Union. (2024). *Regulation (EU) …/… laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)* (EU AI Act). [EU AI Act](https://artificialintelligenceact.eu/article/9/)
Grant Thornton LLP. (2026, April 12). *Grant Thornton survey: A widening “AI proof gap” is emerging, but well‑governed companies have the advantage* (2026 AI Impact Survey). [Grant Thornton](https://www.grantthornton.com/services/advisory-services/artificial-intelligence/2026-ai-impact-survey)
ISO. (2023). *Information technology—Artificial intelligence—Management system* (ISO/IEC 42001:2023). [ISO](https://www.iso.org/standard/81230.html)
KPMG, & University of Melbourne. (2025). *Trust, attitudes and use of artificial intelligence: A global study* (Global report and Australia snapshot). [KPMG](https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf)
National Association of Corporate Directors. (2025, March 10). *Boardroom tool: Questions for directors to ask about AI*. [NACD](https://www.nacdonline.org/all-governance/governance-resources/governance-research/director-handbooks/DH/2025/ai-in-cybersecurit)
National Association of Corporate Directors. (2025, July 27). *2025 public company board practices & oversight survey* (AI oversight section). [NACD](https://www.nacdonline.org/all-governance/governance-resources/governance-surveys/surveys-benchmarking/2025-public-company-board)
National Institute of Standards and Technology. (2023, January 25). *Artificial intelligence risk management framework (AI RMF 1.0)* (NIST AI 100‑1). [NIST](https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf)
National Institute of Standards and Technology. (2024, July 26). *Artificial intelligence risk management framework: Generative artificial intelligence profile* (NIST AI 600‑1). [NIST](https://www.nist.gov/itl/ai-risk-management-framework)
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. *Science, 366*(6464), 447–453. [Science](https://doi.org/10.1126/science.aax2342)
PwC Australia. (n.d.). *Artificial intelligence: What directors need to know*. [PwC](https://www.pwc.com.au/pdf/artificial-intelligence-what-directors-need-to-know.pdf)
Royal Commission into the Robodebt Scheme. (2023, July 7). *Report of the Royal Commission into the Robodebt Scheme* (Final report). [Australian Government](https://www.pm.gov.au/media/final-report-royal-commission-robodebt-scheme)
UpGuard. (2025, November 10). *The state of shadow AI: Trends, insights & statistics*. [UpGuard](https://www.upguard.com/resources/the-state-of-shadow-ai)