CPS 230 Is Not an AI Standard. But It Governs Everything AI Touches.

APRA's Prudential Standard CPS 230 (Operational Risk Management), which came into effect on 1 July 2025, does not mention artificial intelligence by name. It does not need to. The standard applies to all material business operations, all critical processes, and all third-party service arrangements that underpin them. The moment you deploy AI into a loan decisioning workflow, a fraud detection pipeline, a claims triage process, or a customer onboarding system, CPS 230 applies to the technology, the people operating it, and the vendor that built it.

This matters because the regulatory posture has shifted. Before CPS 230, APRA's operational risk requirements were spread across multiple standards with varying levels of prescription. CPS 230 consolidates them into a single, principles-based framework that expects entities to identify, assess, manage, and monitor operational risk across the full scope of their operations. Norton Rose Fulbright's analysis of CPS 230 highlights that the standard imposes heightened expectations around critical operations, material service providers, and business continuity — all areas directly implicated when AI systems are introduced into regulated processes.

This article is not legal analysis. Norton Rose Fulbright, King & Wood Mallesons, and Allens have all published excellent regulatory commentary on CPS 230. What this article provides is an operational guide — written for the Compliance Officer, the Operations Manager, or the Head of Technology at a 50-to-500-person financial services firm who is asking a practical question: "We want to use AI. How do we do it without creating a regulatory problem?"

CPS 230 in Plain Language: What It Requires for AI and Automated Processes

CPS 230 rests on four pillars that directly affect how you deploy and manage AI systems.

1. Operational Risk Management

Every APRA-regulated entity must maintain a comprehensive framework for managing operational risk. When you introduce an AI system into a business process, you are introducing a new operational risk vector. The model might produce incorrect outputs. The training data might contain biases. The system might fail in ways that are not immediately visible to human operators. CPS 230 expects you to have identified these risks, assessed their likelihood and impact, and implemented controls to manage them — before the system goes live.

2. Critical Operations

Entities must identify and maintain a register of critical operations — those whose disruption could have a material impact on the entity or its customers. If an AI system supports a critical operation (and in financial services, most customer-facing AI does), the standard requires enhanced management, monitoring, and continuity planning for that system. This includes tolerance levels for disruption, recovery time objectives, and regular testing of continuity arrangements.

3. Material Service Provider Management

If your AI system is built, hosted, or maintained by a third party — and in most mid-market financial services firms, it is — that provider is likely a material service provider under CPS 230. The standard requires formal agreements, ongoing monitoring, performance reporting, and exit strategies for material service providers. This applies whether the provider is a major cloud platform, a specialist AI vendor, or a consulting firm that deployed and maintains the solution.

4. Business Continuity

CPS 230 requires entities to maintain credible business continuity plans that are regularly tested. For AI systems in critical operations, this means: What happens if the AI system fails? What is the manual fallback? How quickly can you revert? Has this been tested under realistic conditions? An AI system with no documented fallback process is a CPS 230 gap.

Which AI Automations Are High-Risk vs Low-Risk Under CPS 230

Not all AI deployments carry the same regulatory risk profile. The classification below is operational: it groups use cases by whether the automation touches critical operations, makes or directly informs decisions affecting customers or capital, or involves material third-party dependencies.

Higher-Risk AI Applications (Require Comprehensive Controls)

  • Credit decisioning and loan assessment — Directly affects customers and capital allocation. Any automated component of a lending decision is a critical operation by definition.
  • Fraud detection and transaction monitoring — Supports AML/CTF obligations. False negatives create regulatory exposure; false positives affect customer experience at scale.
  • Claims triage and automated adjudication — For insurers, AI that triages or auto-approves claims touches policyholder outcomes and reserving. Both are material.
  • Customer risk scoring and segmentation — When risk scores drive pricing, product eligibility, or account actions, they are part of a critical operation.
  • Algorithmic trading and portfolio rebalancing — Directly affects capital and market conduct obligations.

Moderate-Risk AI Applications (Require Standard Controls)

  • Customer onboarding document verification — Supports KYC/AML processes. Errors create compliance risk, but human review typically remains in the loop.
  • Regulatory report generation — Automation of APRA and ASIC reporting can reduce errors, but the outputs have regulatory consequences if incorrect.
  • Customer complaint classification and routing — Affects response times and regulatory reporting (IDR/EDR), but does not directly determine outcomes.

Lower-Risk AI Applications (Standard Operational Controls Sufficient)

  • Internal knowledge management and search — AI-powered search across internal policies, procedures, and precedent documents. No customer impact, no decisioning.
  • Meeting summarisation and action tracking — Productivity automation for internal governance meetings. No regulatory exposure if outputs are reviewed.
  • Staff rostering and workforce scheduling — Operational efficiency tool with no direct customer or regulatory impact.
  • Marketing content generation and personalisation — Subject to general consumer law and financial services advertising obligations, but not CPS 230 critical.
  • Internal process documentation — Using AI to draft or update SOPs, training materials, or operational guides. Reviewed before use.

The classification above is operational, not legal. Your internal risk assessment may reclassify items based on your specific operations, scale, and risk appetite. The point is that not every AI deployment requires the same level of governance overhead; failing to differentiate wastes resources and slows adoption of genuinely low-risk improvements. The sketch below shows one way to encode this triage as a first-pass screen.
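
This is a minimal sketch, assuming a simple three-question test drawn from the criteria at the top of this section. The dataclass fields, branch order, and the example at the end are illustrative assumptions; the output feeds your formal risk assessment rather than replacing it.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    HIGH = "higher-risk: comprehensive controls"
    MODERATE = "moderate-risk: standard controls"
    LOW = "lower-risk: standard operational controls"


@dataclass
class AIUseCase:
    name: str
    touches_critical_operation: bool             # disruption would materially affect customers or the entity
    informs_customer_or_capital_decisions: bool
    material_third_party_dependency: bool
    human_review_before_effect: bool             # a human always reviews before outputs take effect


def classify(use_case: AIUseCase) -> RiskTier:
    """First-pass CPS 230 triage. Feeds a formal assessment; never replaces one."""
    # Decisions affecting customers or capital inside a critical operation are
    # higher-risk regardless of review arrangements (cf. credit decisioning above).
    if use_case.informs_customer_or_capital_decisions and use_case.touches_critical_operation:
        return RiskTier.HIGH
    if use_case.informs_customer_or_capital_decisions and not use_case.human_review_before_effect:
        return RiskTier.HIGH
    if (use_case.touches_critical_operation
            or use_case.informs_customer_or_capital_decisions
            or use_case.material_third_party_dependency):
        return RiskTier.MODERATE
    return RiskTier.LOW


# Example: the onboarding document-verification case from the moderate tier above.
onboarding = AIUseCase(
    name="customer onboarding document verification",
    touches_critical_operation=False,
    informs_customer_or_capital_decisions=True,
    material_third_party_dependency=True,
    human_review_before_effect=True,
)
print(classify(onboarding))  # RiskTier.MODERATE
```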

Not sure where your AI use cases sit on the risk spectrum?

Our free AI Operations Audit maps your planned and current AI initiatives against your regulatory obligations and gives you a clear, prioritised implementation roadmap with compliance controls built in from the start.

Book your free audit →

5 Compliance-Safe Automations You Can Implement Now

While higher-risk AI applications require months of planning, governance design, and regulatory engagement, there are several high-value automations that financial services firms can deploy with standard operational controls. These fall into the lower-risk and moderate-risk categories and deliver measurable operational value without creating CPS 230 complications.

1. Regulatory Change Monitoring and Impact Assessment

The volume of regulatory change that Australian financial services firms must track — across APRA, ASIC, AUSTRAC, the Treasury, and state-level regulators — is substantial. AI-powered monitoring tools can scan regulatory publications, identify changes relevant to your entity, and draft initial impact assessments for human review. This reduces the risk of a change being missed entirely (a real and recurring problem in mid-market firms with small compliance teams) while keeping human judgment in the assessment loop.

Why it is safe: The AI identifies and summarises. Humans assess and decide. No customer impact, no automated decisioning.
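
As a sketch of what the "AI identifies and summarises, humans assess" split can look like in practice, the following queues watchlist hits for compliance review. The feed URLs, keyword list, and draft_impact_note() helper are illustrative placeholders, not real regulator endpoints or a specific vendor's API; feedparser is a third-party library (pip install feedparser).

```python
import feedparser  # third-party: pip install feedparser

# Placeholder feeds -- substitute the regulator publication feeds you actually track.
FEEDS = {
    "APRA": "https://example.com/apra-publications.rss",
    "ASIC": "https://example.com/asic-media-releases.rss",
}

# Keywords maintained by your compliance team; illustrative only.
WATCHLIST = ["CPS 230", "operational risk", "AML/CTF", "RG 271", "material service provider"]


def draft_impact_note(title: str, summary: str) -> str:
    """Hypothetical hook for an LLM call that drafts an initial impact
    assessment. Swap in your provider's API; the output is a draft only."""
    return f"DRAFT (unreviewed): possible relevance of '{title}': {summary[:200]}"


def scan_feeds() -> list[dict]:
    """Collect watchlist hits for human review. The AI queues; it never decides."""
    review_queue = []
    for regulator, url in FEEDS.items():
        for entry in feedparser.parse(url).entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
            hits = [kw for kw in WATCHLIST if kw.lower() in text]
            if hits:
                review_queue.append({
                    "regulator": regulator,
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                    "matched": hits,
                    "draft_note": draft_impact_note(entry.get("title", ""),
                                                    entry.get("summary", "")),
                })
    return review_queue  # handed to a compliance officer, never auto-actioned


if __name__ == "__main__":
    for item in scan_feeds():
        print(item["regulator"], "|", item["title"], "|", item["matched"])
```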

2. Policy and Procedure Gap Analysis

This automation uses AI to compare your current internal policies against regulatory requirements and identify potential gaps. For example: does your current AML/CTF program address the most recent AUSTRAC guidance? Are your complaint handling procedures aligned with the current ASIC Regulatory Guide 271? This is analysis that compliance teams already do manually; AI accelerates it without changing the decision-making process.

Why it is safe: AI performs the comparison. Compliance staff validate findings and determine remediation priorities. The AI is a research tool, not a decision-maker.
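
A minimal sketch of the screening step, assuming a requirements checklist your compliance team maintains: the checklist items and phrases below are simplified illustrations rather than actual regulatory text, and keyword coverage is a deliberately crude proxy. A miss means "flag for human review", not "non-compliant".

```python
# Requirement -> phrases that would evidence coverage. Illustrative only.
REQUIREMENT_CHECKLIST = {
    "complaint acknowledgement timeframe": ["acknowledge", "one business day"],
    "IDR response timeframe": ["30 days", "IDR response"],
    "escalation to EDR / AFCA": ["AFCA", "external dispute"],
}


def screen_policy(policy_text: str) -> dict[str, bool]:
    """Map each requirement to 'any supporting phrase found in the policy'."""
    lowered = policy_text.lower()
    return {
        requirement: any(phrase.lower() in lowered for phrase in phrases)
        for requirement, phrases in REQUIREMENT_CHECKLIST.items()
    }


sample_policy = """
Complaints are acknowledged within one business day and an IDR
response is issued within 30 days.
"""

for requirement, covered in screen_policy(sample_policy).items():
    print(requirement, "->", "evidence found" if covered else "POTENTIAL GAP: review")
```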

3. Internal Audit Evidence Compilation

When internal or external audit requests evidence of compliance, the process of finding, compiling, and organising that evidence across multiple systems is labour-intensive. AI-assisted evidence retrieval can search across document management systems, email archives, and workflow tools to identify and compile relevant records. The compiled evidence is reviewed by audit staff before submission.

Why it is safe: No customer data is exposed externally. No decisions are automated. The AI reduces search and compilation time; humans validate completeness and relevance.
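
As a stdlib-only sketch of the retrieval step: scan a local document store for records matching an audit request and compile a manifest for audit staff to validate. The path, file types, and keywords are illustrative; a real deployment would query document management systems and email archives through their own APIs.

```python
from pathlib import Path

AUDIT_REQUEST_KEYWORDS = ["quarterly reconciliation", "breach register", "CPS 230"]
DOCUMENT_ROOT = Path("./records")  # placeholder -- point at your real store


def compile_evidence(root: Path, keywords: list[str]) -> list[dict]:
    """Build a candidate-evidence manifest; audit staff validate completeness."""
    manifest = []
    if not root.exists():
        return manifest
    for path in root.rglob("*.txt"):  # extend to other formats as needed
        text = path.read_text(errors="ignore").lower()
        matched = [kw for kw in keywords if kw.lower() in text]
        if matched:
            manifest.append({"file": str(path), "matched": matched})
    return manifest


for record in compile_evidence(DOCUMENT_ROOT, AUDIT_REQUEST_KEYWORDS):
    print(record["file"], "->", record["matched"])
```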

4. Customer Communication Drafting and Quality Assurance

This automation drafts routine customer communications (account notifications, disclosure updates, renewal reminders) and applies AI quality assurance to check them against regulatory requirements and plain language standards. It is particularly valuable for firms subject to the Design and Distribution Obligations (DDO), where the appropriateness of customer communications is a regulatory focus area.

Why it is safe: AI drafts or reviews. Humans approve and send. The approval step maintains regulatory accountability.
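
A minimal sketch of the QA step: screen a drafted communication for required elements and a rough plain-language signal before it reaches a human approver. The required-element list and thresholds are illustrative assumptions, not a statement of what DDO or other obligations actually require for your products.

```python
import re

# Element -> phrases that would evidence it. Illustrative, not exhaustive.
REQUIRED_ELEMENTS = {
    "complaints avenue": ["complaint", "AFCA"],
    "contact details": ["contact us", "1300"],
}
MAX_AVG_SENTENCE_WORDS = 22  # crude plain-language proxy; calibrate locally


def qa_check(draft: str) -> list[str]:
    """Return findings to attach to the draft; a human still approves and sends."""
    findings = []
    lowered = draft.lower()
    for element, phrases in REQUIRED_ELEMENTS.items():
        if not any(p.lower() in lowered for p in phrases):
            findings.append(f"missing {element}")
    sentences = [s for s in re.split(r"[.!?]", draft) if s.strip()]
    avg_words = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    if avg_words > MAX_AVG_SENTENCE_WORDS:
        findings.append(f"average sentence length {avg_words:.0f} words; simplify")
    return findings


draft = "Your policy renews on 1 August. Contact us on 1300 000 000 with any complaint."
print(qa_check(draft) or "no flags: route to approver")
```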

5. Operational Incident Classification and Escalation

When operational incidents occur — system outages, processing errors, staff conduct issues — CPS 230 requires that they are identified, assessed, escalated appropriately, and reported to APRA where material. AI can classify incidents based on type and severity, route them to the appropriate response team, and flag those that meet APRA's materiality thresholds for potential notification. Human judgment remains in the escalation and reporting decision.

Why it is safe: AI classifies and routes. The notification decision — especially to APRA — remains with authorised humans. The automation reduces response time and the risk of an incident falling through the cracks.
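
A minimal sketch of the triage step follows. The severity rules, team names, and the materiality flag are illustrative assumptions; your incident management framework defines the real thresholds, and the decision to notify APRA stays with authorised staff.

```python
from dataclasses import dataclass


@dataclass
class Incident:
    description: str
    customers_affected: int
    critical_operation_disrupted: bool
    data_breach_suspected: bool


# Illustrative routing table.
ROUTING = {
    "sev1": "incident response + executive on-call",
    "sev2": "operations team",
    "sev3": "team lead",
}


def triage(incident: Incident) -> dict:
    """Classify and route; flag (not decide) potential APRA notification."""
    if incident.critical_operation_disrupted or incident.data_breach_suspected:
        severity = "sev1"
    elif incident.customers_affected > 0:
        severity = "sev2"
    else:
        severity = "sev3"
    # Flag only -- authorised humans make any notification decision.
    review_for_apra = severity == "sev1" or incident.customers_affected >= 500
    return {
        "severity": severity,
        "route_to": ROUTING[severity],
        "review_for_apra_notification": review_for_apra,
    }


outage = Incident(
    description="payments file failed to process",
    customers_affected=1200,
    critical_operation_disrupted=True,
    data_breach_suspected=False,
)
print(triage(outage))
```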

Staffing the Compliance Function When Deploying AI

This is where most financial services firms get the implementation wrong, and it is the reason we are writing this article rather than leaving the topic entirely to law firms.

The standard approach to AI implementation in financial services looks like this: hire an AI consultancy or a data science team to build the solution, then hand it to the compliance team to validate that it meets regulatory requirements, then hand it to the operations team to use. Three separate groups, working sequentially, with minimal overlap in their expertise.

The result is predictable. The AI solution works technically but does not account for operational edge cases. The compliance review catches issues late, forcing expensive rework. The operations team receives a system they were not involved in designing and resists adopting it. Research consistently supports what we see in practice: studies published in MDPI journals and research run through Prolific's participant panels have found that domain experts achieve accuracy improvements of 30% or more over non-specialists on complex, context-dependent tasks. AI implementation in a regulated environment is exactly this kind of task.

The alternative — and the approach that consistently produces better outcomes — is to staff AI projects with people who operate at the intersection of compliance knowledge and operational technology skills. Not pure technologists learning compliance from a briefing document. Not compliance officers trying to learn Python. Professionals who have worked in financial services operations, who understand the regulatory environment from direct experience, and who have then developed capability in AI tools and automation.

These are the people who can:

  • Design an AI workflow that accounts for regulatory requirements from the first iteration, not as a retrofit
  • Identify operational edge cases that pure technologists will miss because they have never processed a loan application or filed an APRA return
  • Communicate effectively with both the compliance committee and the development team
  • Manage the change process with operations staff who are understandably cautious about AI in a regulated environment

This is the core of our talent placement model. We have placed 254+ domain-expert professionals into enterprise roles across financial services, healthcare, government, and professional services — people who combine deep operational experience in their industry with practical capability in AI tools and automation. In financial services specifically, this means professionals who have worked in compliance, risk, operations, or lending, and who can lead AI implementation from a position of regulatory understanding rather than technological enthusiasm.

90-Day Implementation Timeline

For a mid-market financial services firm (50-500 staff) looking to deploy its first compliance-safe AI automations under a CPS 230-compliant framework, here is a realistic 90-day timeline.

Days 1-30: Assessment and Governance Foundation

  • Week 1-2: Conduct an operational assessment of current AI use (formal and informal — including staff using ChatGPT or similar tools without oversight), current compliance processes, and the existing operational risk framework. Identify where AI is already in use without formal governance and where the highest-value automation opportunities exist.
  • Week 3: Draft an AI governance framework aligned with CPS 230 requirements. This framework should define risk classification criteria for AI use cases (using the high/moderate/low taxonomy above as a starting point), approval processes for new AI deployments, ongoing monitoring requirements, and incident management procedures specific to AI systems.
  • Week 4: Present the governance framework to the board risk committee or equivalent governance body for endorsement. CPS 230 expects board-level oversight of operational risk management — AI governance should sit within this existing structure, not as a separate silo.

Days 31-60: Controlled Implementation

  • Week 5-6: Implement 2-3 lower-risk automations from the list above. Regulatory change monitoring and internal audit evidence compilation are typically the best starting points — they deliver immediate time savings, carry minimal regulatory risk, and give the organisation practical experience with AI governance without high-stakes consequences if something goes wrong.
  • Week 7-8: Monitor the deployed automations against the governance framework. Document any incidents, near-misses, or governance gaps identified. Use this operational experience to refine the governance framework before moving to higher-risk applications. Conduct a brief lessons-learned review with the project team and compliance function.

Days 61-90: Scaling and Moderate-Risk Deployment

  • Week 9-10: Based on the governance refinements from the first implementation cycle, begin planning and building the first moderate-risk automation. Customer onboarding document verification or complaint classification are typical next steps. These applications involve customer data and touch regulated processes, so the governance controls need to be more rigorous: model validation, bias testing, human-in-the-loop review processes, and documented fallback procedures.
  • Week 11-12: Deploy the moderate-risk automation in a controlled pilot — limited to a subset of the operation, with enhanced monitoring and a clear rollback plan. Document the pilot results, including any compliance-relevant observations, and prepare a board report covering the first 90 days of AI deployment: what was implemented, what was learned, what value was delivered, and what the roadmap looks like for the next quarter.

This timeline assumes the firm has access to people with the right combination of compliance and technology skills. If that capability does not exist internally — and in most mid-market firms, it does not — the 90 days should include sourcing that capability, either through direct hire or through a consulting engagement that builds internal capability while delivering the initial implementations.

Need compliance-literate AI implementation support?

We deploy domain experts who have worked in financial services operations and compliance — not generalist technologists learning your regulatory environment on the job. Our team has driven $30.3M+ in measurable enterprise outcomes across regulated industries.

Book a free AI Operations Audit →

The Talent Gap That CPS 230 Exposes

Australia faces a structural shortage of AI-capable professionals. Jobs and Skills Australia has projected a shortfall of 60,000 AI-related roles by 2027, while the Technology Council of Australia estimates 200,000 new AI-related positions will be needed by 2030. Fewer than 2,000 AI graduates emerge from Australian universities each year. The Australian AI market is growing at a CAGR of 26.25%, projected to reach USD 16.15 billion by 2031.

In financial services specifically, the shortage is compounded by the need for regulatory knowledge. An AI engineer who has built excellent models in retail or logistics cannot simply transfer into a CPS 230-regulated environment without a significant learning curve. The compliance requirements, the stakeholder dynamics, the risk appetite frameworks, the board reporting expectations — these are domain-specific and they take time to learn.

This is why we built our model around training and placing domain experts rather than pure technologists. When you deploy a professional who already understands financial services operations and has then been trained in AI tools and implementation methodology, the productivity advantage is significant — our experience shows these professionals reach productive contribution within 2 weeks, compared to the 3 months typically required for a generalist to become effective in a new domain.

For financial services firms navigating CPS 230 and AI simultaneously, this approach resolves the staffing challenge without the 6-12 month lead time of building the capability organically through graduate hiring and internal training programs.

Common Mistakes to Avoid

Based on our experience working with Australian enterprises across regulated industries, these are the most common mistakes financial services firms make when deploying AI under CPS 230.

  • Treating AI governance as a separate function. AI governance should sit within your existing operational risk management framework, not alongside it. CPS 230 already requires you to manage operational risk comprehensively — AI is a subset of that, not a parallel stream.
  • Over-governing low-risk applications. Applying enterprise-grade governance to a meeting summarisation tool or an internal knowledge search creates bureaucratic overhead that slows adoption without reducing material risk. Differentiate your governance requirements by risk tier.
  • Under-governing shadow AI. Staff in every financial services firm are already using AI tools — ChatGPT, Copilot, Claude, Gemini — for tasks ranging from drafting emails to analysing data. If this usage is not visible to the compliance function, it is an unmanaged operational risk under CPS 230. Acknowledge it, assess it, and create an acceptable use framework rather than pretending it is not happening.
  • Choosing vendors without exit planning. CPS 230 requires exit strategies for material service providers. If your AI vendor relationship becomes material — and if the AI system supports a critical operation, it almost certainly is — you need a documented plan for transitioning away from that vendor if required. This should be part of vendor selection, not an afterthought.
  • Skipping human-in-the-loop for customer-affecting decisions. Until the regulatory environment matures and APRA provides more specific guidance on automated decisioning, maintaining human oversight of AI-informed decisions that affect customers is the safest operational posture. This does not mean every decision needs manual review; it means the review framework, the escalation triggers, and the override capability must be designed into the system from the start, as the sketch after this list illustrates.
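
As a minimal sketch of that design, assuming a model-reported confidence score and a policy that adverse actions are always reviewed (both illustrative choices): the gate decides when a human must review, and every review outcome is recorded for the audit trail.

```python
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    customer_id: str
    action: str          # e.g. "approve", "decline"
    confidence: float    # model-reported confidence, 0-1


@dataclass
class DecisionGate:
    confidence_floor: float = 0.90                 # below this, a human must review
    always_review_actions: tuple = ("decline",)    # adverse outcomes always reviewed
    audit_log: list = field(default_factory=list)

    def requires_human(self, rec: Recommendation) -> bool:
        return rec.confidence < self.confidence_floor or rec.action in self.always_review_actions

    def record_review(self, rec: Recommendation, reviewer: str, final_action: str) -> None:
        self.audit_log.append({"customer": rec.customer_id, "model_action": rec.action,
                               "final_action": final_action, "reviewer": reviewer})


gate = DecisionGate()
rec = Recommendation("C-1042", action="decline", confidence=0.97)
if gate.requires_human(rec):  # adverse action -> escalated despite high confidence
    gate.record_review(rec, reviewer="senior.assessor", final_action="approve")
print(gate.audit_log)
```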

Frequently Asked Questions

Does CPS 230 specifically regulate AI?

No. CPS 230 is a principles-based operational risk management standard. It does not mention AI specifically. However, it applies to all material business operations and critical processes — which includes any process where AI is used. The practical effect is that AI systems deployed in APRA-regulated entities must be governed within the CPS 230 operational risk management framework, including risk assessment, monitoring, incident management, and business continuity planning.

Do we need APRA approval before deploying AI?

CPS 230 does not require prior APRA approval for deploying AI systems. However, if the AI system supports a critical operation or involves a material service provider arrangement, the standard requires that the entity's board (or a board-delegated committee) has oversight of the associated risks. In practice, higher-risk AI deployments should go through your internal risk assessment and governance approval processes before deployment. If you are unsure whether a specific deployment requires enhanced governance, consult your legal advisors.

What AI automations can we implement immediately without CPS 230 concerns?

Internal productivity tools that do not touch customer data, do not make or inform decisions affecting customers, and do not support critical operations can typically be deployed with standard operational controls. Examples include internal knowledge search, meeting summarisation, document drafting for internal use, and staff scheduling. The key test is: if this system failed completely, would it affect customers or the entity's ability to meet regulatory obligations? If the answer is no, standard controls are sufficient.

How does CPS 230 affect our use of third-party AI platforms like OpenAI or Google Cloud AI?

If the AI platform supports a critical operation or the entity has a material dependency on it, the platform provider is likely a material service provider under CPS 230. This triggers requirements for formal service agreements, ongoing monitoring, performance reporting, and exit planning. Even if the platform is accessed through an intermediary or consulting firm, the entity retains responsibility for managing the operational risk. Due diligence on data residency, security, and the provider's own operational resilience is essential.

What should our AI governance framework include at minimum?

At minimum: a risk classification system for AI use cases, defined approval processes for each risk tier, ongoing monitoring and performance metrics, incident management procedures, human-in-the-loop requirements for customer-affecting decisions, vendor management requirements for third-party AI providers, documentation and audit trail standards, and regular reporting to the board risk committee. This framework should sit within your existing CPS 230 operational risk management arrangements, not as a standalone document.
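
One way to make those elements operational is a machine-checkable tier-to-controls mapping that deployment tooling enforces before a use case goes live. The tier contents and control names below are illustrative starting points, not a complete framework.

```python
# Illustrative tier -> required-controls mapping.
GOVERNANCE_TIERS = {
    "low": {
        "approval": "line manager",
        "controls": ["acceptable-use policy", "output review before external use"],
        "board_reporting": "aggregate quarterly summary",
    },
    "moderate": {
        "approval": "risk function sign-off",
        "controls": ["human-in-the-loop review", "incident procedure",
                     "vendor due diligence", "documented fallback"],
        "board_reporting": "quarterly, by use case",
    },
    "high": {
        "approval": "board risk committee",
        "controls": ["model validation", "bias testing", "human-in-the-loop review",
                     "exit strategy for material providers", "continuity testing",
                     "decision audit trail"],
        "board_reporting": "standing agenda item",
    },
}


def preflight(tier: str, controls_in_place: set[str]) -> list[str]:
    """Return the controls still missing before deployment can be approved."""
    required = set(GOVERNANCE_TIERS[tier]["controls"])
    return sorted(required - controls_in_place)


print(preflight("moderate", {"human-in-the-loop review", "incident procedure"}))
# -> ['documented fallback', 'vendor due diligence']
```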

How much does it cost to implement CPS 230-compliant AI governance?

The governance framework itself — policies, procedures, risk assessments, board reporting templates — can typically be developed in 4-6 weeks with the right expertise. The cost of implementation depends on the number and complexity of AI applications, the current maturity of your operational risk framework, and whether you have the right people internally. For a mid-market firm deploying its first 3-5 AI automations, a realistic budget for the governance framework, initial implementations, and first-quarter monitoring is in the range of $30,000-$80,000 for a comprehensive consulting engagement. This includes the governance design, the automation builds, and the knowledge transfer to your team. You can explore our AI consulting services for more detail on what is included at each engagement level.

Where can I find more information on CPS 230 requirements?

APRA's official CPS 230 standard and its supporting practice guide, CPG 230, are published on the APRA website. For detailed legal analysis, Norton Rose Fulbright, King & Wood Mallesons, and Allens have all published comprehensive commentary. For operational implementation guidance specific to AI, our case studies and analysis of AI project failure patterns provide practical context beyond the regulatory text.