Where does your client's data really go? What brokers need to know about AI and intellectual property

In the second of a four-part series, Zahid Bilgrami, CEO of Mortgage Brain, explores the pillars of Mortgage Brain's AI Charter: cost, intellectual property, consistency, and speed.

Zahid Bilgrami | Mortgage Brain
5th May 2026

Mortgage data is among the most sensitive personal financial information there is. Income, employment, dependents, debts, credit history, property details, identity documents. The kind of data that, in the wrong hands, causes real harm to real people.

So here is the question every broker firm should be able to answer, today: when you or your team put that data into an AI tool, where does it actually go?

If the honest answer is "I'm not entirely sure", you are not alone. But that is a problem.

What brokers need to understand about third-party AI

A significant number of the "AI-powered" tools being sold to brokers are, under the bonnet, sending client data through infrastructure owned by OpenAI, Google or Anthropic. Your vendor may have built a user interface. They may have branded it for the mortgage industry. But the heavy lifting, the bit where your client's data is actually processed, is happening somewhere else entirely.

What does this mean for brokers? It means that every time your firm runs a client case through one of these tools, you may be exporting sensitive personal financial data to a third party you have never contracted with, whose terms you have never read, and whose infrastructure you have no visibility of.

That is a governance question, not a technology question. One that brokers need to start owning directly.

The four questions every broker firm should already be able to answer

Before any AI system touches client data inside your firm, someone senior should be able to answer four things:

• Where is that data processed?
• Who has access to it?
• Could it be used to train the provider's AI model?
• What happens in the event of a data breach?

If your provider cannot answer all four clearly, in writing, your firm has a material exposure it has not properly reckoned with. 

The training data problem nobody is telling brokers about

Here is the part that tends to shock broker firms when they first hear it.

When a broker sends client data through a third-party AI system, there is a real risk that your clients' sensitive financial information may be used to train someone else's AI model. This is not always obvious in the terms and conditions. Many of the major providers reserve the right to use inputs for model training unless enterprise agreements explicitly prohibit it. And even then, the contractual protections are not always watertight.

How could this impact brokers?

Very seriously. Once client data has been absorbed into the training of a large general purpose AI model, it cannot be pulled back out. You cannot un-train a model. Which means that if the data should never have been there in the first place, you do not have a clean remediation path. You have a disclosable incident.

For a regulated firm operating under Consumer Duty, that is a material risk that needs to be owned and answered at board level.

The jurisdiction issue brokers keep missing

There is another dimension to this that often gets overlooked in the broker conversation.

Data put into a public general purpose AI model can be processed in the US or elsewhere, entirely outside of UK or EU jurisdiction. That sounds like a technicality. It is not.

What does this mean for brokers in practice? 

A different legal framework applies to that data the moment it leaves UK or EU soil. Different regulators have oversight. Different rules govern what can be done with it, who can access it, and what your firm's recourse looks like if something goes wrong. Under UK GDPR, exporting personal data to a jurisdiction without adequate protection is a serious regulatory issue.

Many brokers assume that because they have signed a contract with a UK tech provider, their client data stays in the UK. That is not how it works. Your contract is with your provider. Your provider's contract is with the AI model company. And the AI model company's infrastructure may sit in a US data centre regardless of where you or your clients operate.

The FCA's direction of travel on AI oversight is clear. Broker firms with proper data governance frameworks in place now will be considerably better positioned as those requirements develop. Broker firms without them will be playing catch up in a much less forgiving environment.

What brokers need to know about the wider risks

Data sovereignty is the headline issue, but it is not the only one. When client data flows through third-party AI systems, broker firms need to understand that it becomes exposed to a wider range of risks than most realise:

• Manipulation. Prompt injection attacks and other AI-specific vulnerabilities can cause models to behave unpredictably or leak information they should not.

• Phishing and social engineering. Client data used to train or fine tune public models can surface in unexpected outputs, giving bad actors fresh material to work with.

• Unauthorised sharing. Terms that permit "service improvement" or "quality assurance" access can mean that employees, contractors or reviewers at the AI provider can see data your firm believed was private.

• Loss of control. Once data has been ingested by a third-party AI system, your firm's ability to locate it, retrieve it, correct it or delete it is severely limited, and in some cases, impossible.

For brokers, this means that the question is not just "is my provider reputable?". It is "what happens to my client's data the moment it leaves my firm, and can I prove – to a client, to the FCA, to my professional indemnity insurer – exactly where it has been?"

How Mortgage Brain handles this differently

Because Mortgage Brain builds and runs its own AI, none of the above applies to data shared with us. Client data exists only within Mortgage Brain's systems. It is not passed to OpenAI, Google or Anthropic. It is not processed outside UK or EU jurisdiction. It is not at risk of being used to train someone else's model. There are guardrails in place to make sure it goes nowhere else.

For broker firms, that turns a sprawling, hard-to-govern risk into a simple, answerable one. Your firm's data sits with us. Full stop.

We also hold a vast amount of proprietary industry data – lender policy criteria, product pricing structures, decades of market knowledge – that simply does not exist inside public general purpose AI models and is unlikely ever to exist there in a complete or reliable way. So broker firms using our tools are not just protecting client data. They are getting outputs grounded in real mortgage infrastructure, not generic internet assumptions.

What brokers should be asking their providers

If data sovereignty matters to your firm, and it must, here are the questions every broker and broker firm should be putting to their AI or technology provider before signing, or at your next renewal conversation.

Is my client's data processed by your own AI, or by a third-party AI provider? This is the foundational question. If client data is being handed off to OpenAI, Google or Anthropic, everything else in the conversation must be tested against that reality.

Where, geographically, is my client's data processed? Get a specific answer. "The UK and EU" is acceptable. "Our infrastructure" is not. If the provider cannot name the jurisdictions their AI processes data in, your firm cannot meet its UK GDPR obligations.

Can you prove to me, in writing, that client data does not leave UK or EU jurisdiction? This needs to be contractual, not verbal. Ask for the specific clause. If it is not there, ask why.

Is my client's data used to train your AI model, or any third-party AI model, at any point? This is the single question most likely to expose a problem. The answer you need is an unambiguous "no", backed by contract.

What happens if there is a data breach at the underlying AI provider, not at your firm? Many vendors have a clear process for breaches inside their own organisation but are vague about what happens if the breach occurs upstream, at the AI model company they rely on. Your firm needs to know how, when and by whom it would be notified.

Can I see your AI provider sub-processor list, and is it kept current? Under UK GDPR, your firm is entitled to know who is processing your client's data on your behalf. A provider that cannot produce this list, or that updates it without notifying you, is a governance risk.

What is your process for deleting client data on request, and does that extend to any data that has been processed by a third-party AI? A right-to-erasure request is meaningless if the data has already been absorbed into someone else's model. Ask how your provider handles this, realistically.

The bottom line for brokers

Brokers sit at the point in the mortgage journey where the most sensitive client data is collected and acted upon. That comes with a governance responsibility that cannot be outsourced to a tech provider, however slick their user interface.

The broker firms who will come through the next few years strongest are the ones treating their clients' data as exactly what it is: a regulated asset that must stay under their control, in jurisdictions they understand, with providers they can hold directly accountable.

Intellectual Property is the second pillar of our AI Charter for a reason. Your client's data belongs to your client. Your firm's job is to make sure it stays that way.
