CPS 234 Was Not Written for AI — But It Applies to Every AI Tool You Use
APRA Prudential Standard CPS 234 (Information Security) came into force on 1 July 2019. It was designed to ensure APRA-regulated entities maintain information security commensurate with the size and extent of threats to their information assets.
CPS 234 does not mention artificial intelligence. It does not need to. The standard applies to information assets — and every piece of data processed by an AI tool is an information asset. When a fund manager pastes a DDQ into ChatGPT, when an insurer feeds claims documents into a cloud AI service, or when a super fund uses a generative AI tool to draft member communications, CPS 234 applies.
With CPS 230 (Operational Risk Management) third-party requirements taking full effect from 1 July 2025, the obligations around AI tool adoption are tightening. Firms that adopted AI tools informally — staff using ChatGPT on personal accounts, teams experimenting with cloud AI services — now face a compliance gap that APRA expects to be closed.
This checklist covers what Australian financial services firms must verify before any AI tool touches client data.
The Checklist
1. Information Asset Classification
CPS 234 requirement: An APRA-regulated entity must classify its information assets by criticality and sensitivity (CPS 234, paragraph 15).
What this means for AI tools:
Every category of data that flows into or out of an AI tool must be classified:
- Client personal information (names, addresses, TFNs, beneficial ownership details) — highest sensitivity
- Financial data (portfolio positions, performance figures, fee structures, investment strategies) — commercially sensitive
- Compliance documents (DDQ responses, regulatory filings, audit artifacts) — regulatory sensitivity
- Internal operational data (meeting minutes, policy documents, procedures) — moderate sensitivity
Checklist items:
- All data categories processed by the AI tool are documented and classified
- Classification aligns with your existing information security policy
- The AI tool's data handling matches the classification level — highly sensitive data requires stronger controls
- Data classification is reviewed when the AI tool's scope expands (e.g., adding new document types or data sources)
Common failure: Firms classify their CRM and administration data but forget to classify AI inputs and outputs as separate information assets. If a staff member pastes client data into an AI tool, that interaction is an information asset — even if the AI tool is not formally part of your technology stack.
2. Third-Party Risk Assessment
CPS 234 requirement: Where information assets are managed by a related party or third party, the entity must assess the information security capability of that party commensurate with the potential consequences of an information security incident (CPS 234, paragraphs 21-23).
What this means for AI tools:
Any AI service that processes your data externally is a third party under CPS 234. This includes:
- Cloud AI APIs (OpenAI, Google Gemini, Anthropic API, Microsoft Copilot)
- SaaS platforms with embedded AI (document processing tools, CRM add-ons with AI features)
- Consulting firms using AI on your data (see the Deloitte incident where undisclosed AI use in a government report resulted in fabricated references and a contract refund)
Checklist items:
- Every AI tool that processes your data is identified and documented as a third-party provider
- Due diligence has been conducted on each provider's information security posture
- The provider's data processing locations are documented (note: OpenAI processes data in the United States with no Australian data residency option)
- The provider's data retention and deletion policies are documented and acceptable
- The provider's model training practices are understood — does it train on your inputs? (OpenAI trains on inputs by default for free, Plus, and Team tiers)
- The provider's sub-processors are identified and assessed
- Contractual arrangements address information security requirements, incident notification, and audit rights
- The assessment is reviewed at least annually, or when the provider's terms change
Common failure: Staff adopt AI tools without IT or compliance involvement. A portfolio analyst uses ChatGPT to draft DDQ responses. A compliance officer uses an AI summarisation tool on regulatory documents. These are third-party data processing relationships under CPS 234, even if no formal procurement occurred.
The on-premise alternative: On-premise AI deployment eliminates this entire category of third-party risk. When the AI model runs inside your own infrastructure — your Azure tenancy, your AWS account, your data centre — there is no third party to assess. CPS 234's third-party provisions do not apply because there is no third-party data processing.
3. Access Controls
CPS 234 requirement: An APRA-regulated entity must restrict access to information assets to authorised personnel through appropriate access controls (CPS 234, paragraph 17).
What this means for AI tools:
- Who can use the AI tool?
- What data can each user access through the AI tool?
- Are access permissions aligned with your existing role-based access controls?
Checklist items:
- AI tool access is managed through your existing identity and access management (IAM) framework
- Role-based access controls determine what data each user can process through the AI tool
- Access is provisioned and de-provisioned through the same processes as other information systems
- Privileged access to the AI tool's configuration and training data is restricted and logged
- Multi-factor authentication is enforced for AI tool access
- Access reviews include the AI tool in their scope
Common failure: Cloud AI tools operate outside your IAM framework. When a staff member accesses ChatGPT with a personal account, your access controls do not apply. You cannot restrict what data they input, you cannot audit their usage, and you cannot revoke access through your standard processes.
4. Logging and Audit Trails
CPS 234 requirement: An APRA-regulated entity must have mechanisms in place to detect and respond to information security incidents in a timely manner, supported by appropriate logging (CPS 234, paragraphs 24-27).
What this means for AI tools:
Every interaction with an AI tool that processes client data must be logged within your controlled environment:
- What data was input to the AI
- What output the AI generated
- Who initiated the interaction
- When the interaction occurred
- What actions were taken on the AI's output (approved, rejected, modified)
Checklist items:
- All AI tool interactions involving client data are logged
- Logs are stored within your controlled infrastructure (not solely in the AI provider's environment)
- Logs include sufficient detail for incident investigation and regulatory reporting
- Log retention periods meet APRA's expectations and your internal policies
- Logs are included in your security monitoring and alerting framework
- AI-generated outputs are version-controlled with approval workflows
Common failure: Cloud AI tools maintain their own logs, but you may not have access to them — or they may be purged according to the provider's retention schedule, not yours. If APRA asks for an audit trail of how a specific document was generated, you need that trail within your systems.
5. Incident Response
CPS 234 requirement: An APRA-regulated entity must notify APRA of material information security incidents no later than 72 hours after becoming aware (CPS 234, paragraph 28). For incidents involving a third party, the entity must also notify APRA of its assessment of the third party's response.
What this means for AI tools:
If a cloud AI provider experiences a data breach that affects your data, you must:
- Detect that the incident has occurred (which requires the provider to notify you)
- Assess the impact on your information assets
- Notify APRA within 72 hours
- Assess and report on the third party's response
Checklist items:
- Your incident response plan includes scenarios involving AI tool security incidents
- Contractual arrangements with AI providers include timely incident notification obligations
- You have a process to assess the impact of a provider-side incident on your information assets
- APRA notification procedures are documented and tested for AI-related incidents
- Incident response exercises include AI tool scenarios
Common failure: Most cloud AI providers' terms of service do not include incident notification obligations that match APRA's 72-hour requirement. If OpenAI experiences a breach affecting your data, their standard terms do not guarantee they will notify you in time for you to notify APRA.
6. Information Security Capability
CPS 234 requirement: An APRA-regulated entity must maintain an information security capability commensurate with the size and extent of threats to its information assets (CPS 234, paragraphs 10-14). This includes clearly defined roles and responsibilities.
What this means for AI tools:
- Does your team have the expertise to assess AI-specific security risks?
- Are AI tools included in your vulnerability assessments and penetration testing?
- Is there clear ownership of AI tool security within your organisation?
Checklist items:
- AI tools are included in your information security risk assessment
- Roles and responsibilities for AI tool security are clearly defined (who owns the risk?)
- Your security team has the capability to assess AI-specific threats (prompt injection, data extraction, model manipulation)
- AI tools are included in your vulnerability management and testing programs
- Board or senior management reporting includes AI tool risk assessment
Common failure: AI security is treated as a technology risk rather than an information security risk. CPS 234 does not distinguish between the two — if an AI tool processes information assets, it falls within the standard's scope, and your information security capability must extend to cover it.
7. Testing
CPS 234 requirement: An APRA-regulated entity must test the effectiveness of its information security controls through a systematic testing program (CPS 234, paragraphs 18-20). Testing must be conducted by appropriately skilled and functionally independent specialists.
What this means for AI tools:
- Are AI tools included in your security testing program?
- Has the AI tool been tested for information security risks specific to AI (data leakage through prompts, output manipulation, unauthorised access to training data)?
Checklist items:
- AI tools are included in your systematic security testing program
- Testing covers AI-specific risks: data leakage, prompt injection, output accuracy, access control bypass
- Testing is conducted by specialists with AI security expertise
- Testing frequency is commensurate with the sensitivity of data processed by the AI tool
- Testing results are reported to senior management and remediation is tracked
- Testing includes scenarios where the AI tool is unavailable (business continuity)
CPS 230: The Third-Party Deadline
CPS 230 (Operational Risk Management) introduces additional requirements for managing material service provider relationships. From 1 July 2025, APRA-regulated entities must:
- Identify and manage material service providers (which may include AI providers if the AI tool supports a critical business process)
- Maintain business continuity plans that address service provider disruptions
- Ensure service provider arrangements do not compromise the entity's ability to meet prudential obligations
For firms that have adopted cloud AI tools for document generation, compliance reporting, or client communications, CPS 230 requires a formal assessment of whether that AI provider is a material service provider — and if so, whether the arrangement meets APRA's expectations.
The Practical Path Forward
There are two approaches to CPS 234 compliance for AI tools:
Approach 1: Comply with third-party requirements. Conduct full CPS 234 due diligence on every cloud AI provider. Negotiate contractual terms covering data handling, incident notification, audit rights, and sub-processor management. Implement logging and monitoring of external AI interactions. Maintain ongoing assessment as provider terms change. This is achievable but creates significant ongoing compliance overhead.
Approach 2: Eliminate the third-party risk. Deploy AI on-premise, within your own infrastructure. The AI model runs on your compute, processes data within your environment, and operates under your existing information security controls. CPS 234's third-party provisions do not apply. Your existing access controls, logging, incident response, and testing frameworks extend to cover the AI tool as an internal system.
For most APRA-regulated entities, Approach 2 is operationally simpler. The compliance conversation shifts from "how do we manage this third-party risk" to "how do we secure this internal system" — something your information security team already does.
BackPro AI deploys entirely within your infrastructure. It generates CPS 234 compliance documentation, automates APRA reporting, and provides gap analysis against the standard's requirements — all without data leaving your environment. Every interaction is logged, source-attributed, and auditable within your systems.
Book a demo to see how on-premise AI meets CPS 234 requirements for your organisation.
Related reading: Why Australian Fund Managers Can't Use ChatGPT for Client Documents | Why On-Premise AI Is Non-Negotiable for Australian Financial Services | APRA Compliance Automation for Super Funds