Data Sovereignty

Why Australian Fund Managers Can't Use ChatGPT for Client Documents

BackPro AI · 8 min read

The Deloitte Incident Changed the Conversation

In October 2025, Deloitte agreed to partially refund the Australian Government after a $439,000 report was found to contain fabricated academic references, invented experts, and a hallucinated quote attributed to a Federal Court judge.

The report, an independent assurance review commissioned by the Department of Employment and Workplace Relations, had been produced using Azure OpenAI GPT-4o. Deloitte did not disclose the use of AI until a corrected version was published. University of Sydney researcher Dr Chris Rudge identified citations to non-existent papers falsely attributed to the University of Sydney and Sweden's Lund University.

Deloitte is a Big Four firm with dedicated AI governance teams, quality assurance processes, and hundreds of partners reviewing output. If their controls failed to catch fabricated citations in a government report, the question for every Australian fund manager is straightforward: what happens when the same technology hallucinates in a DDQ response, an investor report, or a compliance document?

Five Reasons ChatGPT Does Not Work for Fund Management Documents

1. Your Data Goes to the United States

When a fund manager pastes a DDQ question, an investor query, or a compliance document into ChatGPT, that data is transmitted to OpenAI's servers in the United States.

OpenAI's privacy policy states that personal data is processed "in the United States or in countries where their affiliates, partners, vendors and service providers are located." Data residency options exist for Enterprise customers in Europe and Asia — but not in Australia.

Under the Privacy Act 1988, Australian Privacy Principle 8 (APP 8) governs cross-border disclosure of personal information. Before disclosing personal information to an overseas recipient, the fund must take reasonable steps to ensure the recipient handles information consistently with the APPs. Critically, the disclosing entity remains legally accountable for any breach by the overseas recipient — even if it took reasonable steps.

The Office of the Australian Information Commissioner (OAIC) has issued direct guidance on this point:

"Organisations should not enter personal information, and particularly sensitive information, into publicly available generative AI tools, due to the significant and complex privacy risks involved."

For a fund manager handling investor personal information, beneficial ownership details, and AML/KYC documentation in DDQ responses, sending this data to a US-based AI service creates a cross-border disclosure that triggers APP 8 obligations — obligations that are difficult to satisfy given OpenAI's data handling terms.

2. OpenAI Trains on Your Inputs by Default

By default, conversations with ChatGPT are used to improve OpenAI's models. Inputs are retained, a subset undergoes human review, and the data is fed into reinforcement learning pipelines.

Users can opt out via Settings > Data Controls, and business tiers (ChatGPT Enterprise, Team, and the API) are excluded from training by default. But for staff using ChatGPT through free or Plus accounts, which is how most informal workplace access happens, DDQ content, portfolio positions, and investment strategy language are processed for model training unless each individual user has opted out.

Consider what fund management documents contain:

  • DDQ responses with investment strategy descriptions, risk management frameworks, fee structures, and portfolio construction methodologies
  • Investor reports with performance attribution, market outlook, and position-level commentary
  • Compliance documents with beneficial ownership information, AML/KYC details, and regulatory filing data

This is commercially sensitive intellectual property. Investment strategies, portfolio positions, and analytical frameworks embedded in these documents are the fund's competitive advantage. Processing them through a service that handles millions of requests daily — including from competitors — is an intellectual property risk that no investment committee should accept.

3. OpenAI's Own Terms Say Do Not Rely on It

OpenAI's Terms of Use contain explicit disclaimers that should give any regulated entity pause:

"You should not rely on Output as a sole source of truth or factual information, or as a substitute for professional advice."

"Given the probabilistic nature of machine learning, use of the Services may result in Output that does not accurately reflect real people, places, or facts."

"Output may not always be accurate... We do not warrant that the Services will be uninterrupted, accurate or error free."

The services are provided "AS IS" with all warranties disclaimed, including merchantability and fitness for a particular purpose.

In practical terms: if ChatGPT fabricates a compliance statement in a DDQ response, misrepresents a fund's performance history in an investor report, or generates an incorrect regulatory disclosure, OpenAI's own terms say that is not their problem. The fund manager bears the full regulatory and legal consequence.

This is not a theoretical concern. The fabrication pattern documented in the Deloitte incident — plausible-sounding but entirely invented citations — is exactly the kind of hallucination that is hardest to catch in financial documents. A fabricated regulatory reference in a DDQ response looks correct. A hallucinated performance figure sits alongside real data. The output is fluent and confident, which is precisely what makes it dangerous.

4. ASIC Is Already Watching

In October 2024, ASIC published Report 798, Beware the gap: Governance arrangements in the face of AI innovation. After reviewing 624 AI use cases across 23 AFS and credit licensees, ASIC found that:

  • Licensees are adopting AI faster than they are updating risk and compliance frameworks, creating a governance gap
  • One licensee used an AI credit-scoring model described as a "black box" with no transparency about which variables influenced scores
  • AFS licensees must provide services "efficiently, honestly and fairly" under s912A of the Corporations Act — regardless of whether AI is used in the process
  • Third-party AI models must meet the same governance standards as internally developed ones

For fund managers, the implications are direct. If a DDQ response is generated using ChatGPT, the fund manager remains responsible for the accuracy of that response. If an investor report contains AI-generated analysis that misrepresents fund performance, the fund manager bears the regulatory consequence — not OpenAI.

ASIC's RG 104 (AFS Licensing: Meeting the General Obligations) reinforces this: the licensee remains responsible for any function it outsources, including data processing. Adequate oversight must be maintained. Business continuity must be assured.

Using ChatGPT for client documents without formal governance, audit trails, and quality assurance frameworks is precisely the kind of governance gap ASIC identified in REP 798.

5. APRA's Prudential Standards Apply to Your AI Tools

For fund managers whose activities fall under APRA supervision — or whose institutional clients (super funds, insurers) impose APRA-aligned requirements — the prudential standards create additional obligations.

CPS 234 (Information Security) requires that information assets are protected commensurate with their sensitivity. Client data processed by an AI tool is an information asset. If that processing happens via a third-party cloud service, CPS 234 triggers specific due diligence, access control, and incident reporting obligations.

CPS 230 (Operational Risk Management) requires entities to manage risks from service provider dependencies — including AI providers. If ChatGPT experiences an outage, changes its terms of service, or modifies its data retention policies, the fund manager must have continuity plans in place.

APRA Member Therese McCarthy Hockey stated at the AFIA Risk Summit 2024:

"Companies cannot delegate full responsibility to an AI program."

"As AI algorithms become more complex and the systems more autonomous and opaque, detecting when, how and why the technology's analysis is off-track will become increasingly difficult."

For fund managers serving APRA-regulated clients, using ChatGPT for document generation creates a compliance dependency that APRA has explicitly warned against.

The Pattern That Should Concern Fund Managers

The documented hallucination cases follow a consistent pattern:

  • Mata v. Avianca (2023, US): Lawyers cited six entirely fabricated cases generated by ChatGPT in a federal court filing. The court imposed sanctions and found "subjective bad faith."
  • Arizona Social Security case (2024, US): A lawyer was sanctioned after 12 of 19 cited cases were fabricated, misleading, or unsupported.
  • California appellate case (2025, US): An attorney was fined $10,000 after 21 of 23 quotes from cited cases were entirely fabricated by ChatGPT.
  • Deloitte-DEWR report (2025, Australia): Fabricated academic references in a $439,000 government report, resulting in a partial refund.

Every one of these cases involves professionals who assumed the AI output was accurate because it was fluent and well-structured. The hallucinations were not obvious errors — they were plausible fabrications that required domain expertise to identify.

Fund management documents carry the same risk. A fabricated regulatory reference in a DDQ response, an incorrect performance figure in an investor report, or a hallucinated compliance statement in a regulatory filing would not look like an error. It would look like a fact — until a regulator, auditor, or allocator checks it.

What Fund Managers Actually Need

The answer is not to avoid AI. The operational pressure on fund managers — DDQ volumes, investor reporting deadlines, compliance documentation requirements — makes AI adoption necessary for competitive viability.

The answer is AI that is built for the regulatory environment fund managers operate in:

On-premise deployment — the AI runs inside your infrastructure. No data leaves your environment. No cross-border transfer. No third-party data processing. APP 8 does not apply because there is no cross-border disclosure.

Source attribution — every AI-generated response is traceable to a specific paragraph in a specific document. When an allocator or auditor asks where a DDQ answer came from, the source is cited and verifiable.

Audit trails — every input, output, and human review action is logged within your systems. ASIC's governance expectations and APRA's CPS 234 requirements are met through your existing infrastructure controls.

Domain specificity — the AI understands DDQ formats, GIPS-compliant reporting structures, and Australian regulatory requirements. It does not generate generic text — it synthesises answers from your fund's actual documents.
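
To make these properties concrete, below is a minimal sketch of the pattern, assuming a locally hosted, OpenAI-compatible inference endpoint (for example vLLM or Ollama) and a document store already split into paragraphs with stable IDs. The endpoint URL, model name, and helper function are illustrative assumptions, not a description of any particular vendor's implementation.

    # Minimal sketch: on-premise generation with source attribution and audit logging.
    # Assumptions (illustrative only):
    #   - a local OpenAI-compatible endpoint at http://localhost:8000/v1 (e.g. vLLM or Ollama)
    #   - fund documents already split into paragraphs with stable IDs by a retrieval layer
    import json
    import hashlib
    from datetime import datetime, timezone

    from openai import OpenAI  # pip install openai; pointed at a local server, not api.openai.com

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

    def answer_ddq_question(question: str, passages: list[dict], user: str) -> dict:
        """Draft an answer grounded in retrieved passages, with citations and an audit record.

        Each passage is {"doc": filename, "para": paragraph_id, "text": paragraph_text},
        supplied by whatever search layer the fund uses (out of scope for this sketch).
        """
        # Force the model to answer only from the supplied, identified paragraphs.
        context = "\n\n".join(f"[{p['doc']} ¶{p['para']}] {p['text']}" for p in passages)
        prompt = (
            "Answer the DDQ question using ONLY the sourced paragraphs below. "
            "Cite every claim with its [document ¶paragraph] tag. "
            "If the paragraphs do not contain the answer, say so.\n\n"
            f"Paragraphs:\n{context}\n\nQuestion: {question}"
        )

        response = client.chat.completions.create(
            model="local-model",  # whatever model the local server hosts
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # deterministic drafts are easier to review and audit
        )
        draft = response.choices[0].message.content

        # Append-only audit record: hashes tie the log entry to the exact input and
        # output without duplicating sensitive content into the log itself.
        audit_entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "question_sha256": hashlib.sha256(question.encode()).hexdigest(),
            "sources": [f"{p['doc']}#{p['para']}" for p in passages],
            "output_sha256": hashlib.sha256(draft.encode()).hexdigest(),
            "status": "draft_pending_human_review",  # nothing leaves without sign-off
        }
        with open("audit.jsonl", "a") as log:
            log.write(json.dumps(audit_entry) + "\n")

        return {"draft": draft, "sources": audit_entry["sources"]}

The design choice that matters is structural: because the model, the documents, and the audit log all sit inside the fund's own infrastructure, there is no cross-border disclosure to assess under APP 8, and every draft carries the paragraph-level citations a human reviewer needs before anything reaches an allocator.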

Learn how on-premise DDQ automation works for Australian fund managers.

The Decision Is Risk Management

Every fund manager evaluating AI faces the same decision framework:

  1. Does the AI process client or investor data? For any useful application, yes.
  2. Can you satisfy Privacy Act, ASIC, and APRA obligations with a US-based cloud AI service? Given OpenAI's data handling terms, training policies, and accuracy disclaimers — that is a significant compliance burden.
  3. Does the AI provide source attribution and audit trails? ChatGPT does not. It generates text without citing sources, without version control, and without approval workflows.

For Australian fund managers operating under ASIC and APRA oversight, managing investor capital, and responding to institutional allocators who conduct their own due diligence on your operations — the risk profile of ChatGPT for client documents does not align with your regulatory obligations.

Book a demo to see how on-premise AI handles DDQs, investor reports, and compliance documents for Australian fund managers.

Related reading: DDQ Automation for Fund Managers | Why On-Premise AI Is Non-Negotiable for Australian Financial Services