Let me say something that might be uncomfortable.

The financial sector is one of the most data-rich industries on the planet. Banks, insurers, pension funds, fintechs. Decades of transaction data, risk data, claims data. More structured information than almost any other industry.

And yet, when I talk to leaders across financial services about their AI journey, the story I hear most often is not one of bold transformation. It is one of hesitation, stalled pilots and tools that nobody quite owns.

Over 85% of financial firms are actively applying AI in some form in 2025. AI spending in financial services is projected to reach $97 billion by 2027. The ambition and the investment are clearly there.

So why do so many organisations feel stuck? That is not a technology problem. It is a leadership problem.

Why financial services is different. And why that matters.

In most industries, a failed AI implementation means wasted budget and a frustrated team. In financial services, the stakes are higher.

An AI that hallucinates in a retail context is embarrassing. An AI that hallucinates while advising on financial products, assessing credit risk or handling a claims query is a trust catastrophe. The kind that takes years to recover from, if you recover at all.

This is why the financial sector moves carefully. And honestly, that instinct is not wrong. The problem is when careful becomes a reason to never fully commit.

The numbers tell a revealing story. By 2025, around 90% of insurers had begun evaluating or implementing AI. Analysts estimate that by 2026, about 80% of insurers will be deploying AI in at least one core function. Yet only a small minority of organisations have scaled meaningfully beyond pilots.

That gap between starting and scaling is where the real problem lives.

The use case that changed how I think about data.

A situation I came across shaped how I now talk about AI readiness with every organisation I work with.

A financial institution wanted to move fast on AI. They had the ambition, the budget and the executive support. What they did not have was clean, connected data. Records lived in multiple systems. There were duplicates, gaps, inconsistencies. The kind of data infrastructure that had grown organically over decades and had never been properly consolidated.

They faced a choice. Launch AI on top of what they had, or invest first in getting the foundation right.

They chose the foundation. It was not the exciting choice. But when they eventually deployed AI agents for interactions with their account holders, those agents worked. They did not make things up. They did not give conflicting information. They were trusted, because the data behind them was trustworthy.

This matters more than people realise. BCG research shows that organisations adopting AI with strong data foundations see up to 60% efficiency gains and 40% cost reductions in areas like onboarding, compliance and settlement. Without that foundation, you are not just leaving value on the table. You are actively building something fragile.

In financial services, trust is not a feature of your product. Trust is the product.

The use case that changed how I think about people.

There is a pattern I see repeatedly in financial services operations teams. Highly skilled people spending the majority of their time on questions they have already answered hundreds of times. Routine queries. Standard processes. Work that is necessary but does not require their full capability.

One organisation decided to ask a different question. Instead of hiring more people to handle the volume, they asked: what if we let AI handle everything routine, so our people can focus on everything that actually requires judgment?

The results were significant. A large share of routine queries moved to AI, handled around the clock. The human team shifted to complex situations, vulnerable account holders, cases where experience and empathy genuinely mattered.

This is not an isolated example. Research shows that AI has helped financial organisations reduce administrative tasks by up to 30%, and that AI-powered automation has cut service costs by 20 to 40% in some implementations. In insurance specifically, leading organisations are reporting reductions in manual claims processing effort of up to 80%.

What struck me most in that conversation was not the efficiency figure. It was what the people said afterwards. They felt their work meant more.

That is what good AI implementation looks like. Not replacing people. Giving them back the work worth doing.

The use case that changed how I think about governance.

Insurance is perhaps the most risk-aware sector I work with. Decisions made on the basis of flawed models have real consequences for real people. Premiums, claims, coverage. These things matter.

What I find consistently is that the best AI implementations in insurance are not the ones that moved fastest. They are the ones that built governance first.

One insurer treated their AI governance policy the same way they treated any other major risk framework. They piloted it, tested it, learned from it and refined it before rolling it out organisation-wide. Role-based data access. Clear accountability for every model in production. A defined process for when something goes wrong.

This is not excessive caution. It is competitive positioning.

The EU AI Act is now in force. High-risk AI systems, including credit scoring, fraud detection and underwriting, face strict obligations, with penalties for non-compliance reaching up to 7% of global annual turnover for the most serious violations. The FCA has increased scrutiny of AI-driven risk models. A UK parliamentary committee recently urged regulators to move faster on AI accountability standards.

Organisations building governance frameworks now are not slowing themselves down. They are building something their competitors will spend years trying to catch up with.

There are three speeds of AI. Most organisations are only running one.

When I work with organisations on their AI strategy, I use a frame that tends to cut through quickly.

Personal AI is the entry point. Someone uses a tool to draft a report faster or summarise a document. Useful, but the organisation is not changing. One person is just a little less tired.

Process AI is where things start to get interesting. AI embedded in how a team works, how a workflow runs, how a decision gets made. In financial services, this looks like automated triage in claims, AI-assisted compliance checking, intelligent routing in operations. This is where real gains become visible across teams, not just individuals.

Enterprise AI is the destination most leaders say they want to reach. Agents running end-to-end journeys. AI involved in decisions that shape the organisation. IDC predicts adoption of agentic AI in financial services will triple in the next two years. Frontier Firms, organisations that embed AI agents across every workflow, already report returns on their AI investments roughly three times higher than slow adopters.

Here is what I see consistently. Organisations are operating at level one while their leadership is talking about level three. The tools exist. The ambition exists. But the bridge between them is not being built.

That bridge is not more technology. It is ownership, governance and the willingness to bring your people along.

An agent is not a product. It is a responsibility.

This is the sentence I find myself saying more and more.

When a bank or insurer deploys an agent, someone needs to own it. Not just technically. Someone needs to own what it does, what it says, what it decides and what happens when it gets it wrong.

And yet in most organisations, that person does not yet exist. The agent gets built, demonstrated to leadership and then quietly abandoned. No one is watching the outcomes. No one is adjusting for drift. No one knows whose job it is when something goes wrong.

Currently, only 63% of organisations are formally measuring AI return on investment. Those that do expect it to take an average of 28 months to realise. That means more than a third of organisations deploying AI have no clear measure of whether it is working. In a regulated industry, that is not just a business problem. It is a governance problem.
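To make that 28-month horizon concrete, here is a back-of-the-envelope payback calculation. The cost and savings figures are entirely hypothetical, chosen only to illustrate the arithmetic, not drawn from any benchmark in this article.

```python
# Hypothetical payback-period sketch for an AI deployment.
# All figures below are illustrative assumptions, not industry data.

def payback_months(upfront_cost: float, monthly_run_cost: float,
                   monthly_savings: float) -> float:
    """Months until cumulative net savings cover the upfront cost."""
    net_monthly = monthly_savings - monthly_run_cost
    if net_monthly <= 0:
        raise ValueError("Deployment never pays back at these figures.")
    return upfront_cost / net_monthly

# Assumed example: £1.4m to build, £50k/month to run, £100k/month saved.
months = payback_months(1_400_000, 50_000, 100_000)
print(round(months))  # 28
```

The point of even a crude model like this is that it forces someone to own the inputs, which is exactly the ownership question the rest of this section is about.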

Building agents is becoming easier every month. Owning them responsibly is a different skill entirely. And it is the one that will separate the organisations that lead from the ones that are forever catching up.

The question I always ask.

At every organisation I work with, at some point I ask the same question.

Who is responsible for the outcome of your AI?

Not who bought the licence. Not who ran the pilot. Who is responsible for what the AI actually does, day after day, for the people it serves and the teams relying on it?

In the organisations making real progress, someone can answer that clearly. There is a name, a role, a mandate.

In the organisations that are stuck, there is a pause. A look around the room. And then usually something about it being a shared responsibility, which in practice means no one’s responsibility.

That is the moment where the real work begins.

Where is this going?

I am genuinely excited about what AI can do for financial services. Not because of the numbers alone, though it is hard to ignore that 58% of financial institutions already attribute revenue growth directly to AI. But because of what becomes possible when the fundamentals are right.

An organisation that can detect fraud faster, with accuracy exceeding 90% and saving billions annually across the industry. A claims team spending their time on the cases that truly require judgment, not buried in paperwork. A compliance function that catches risk earlier, not after the fact.

And beyond the organisation, there is something bigger. Financial services touches everyone. How companies and individuals manage money, how risk is priced, how financial health is supported. If AI can make those things more accurate, more accessible and more human, that is worth getting right.

The organisations that will lead are not the ones moving fastest. They are the ones moving with the most intention.

Get your data right. Own your agents. Bring your people along. Build governance that lasts.

That is not a slow strategy. That is the only strategy that works.