Is trust the banks’ hidden advantage?

Tech giants win on speed, but safety is banks’ strong suit

Artificial intelligence has gone from novelty to daily consumer habit in a very short time. But when the topic turns to finance, many hesitate to let AI do more than offer suggestions. Some 42% of consumers say they wouldn’t trust an AI agent to execute transactions on their behalf, according to our recent survey, even though many routinely use AI for search and summarization tasks. The disconnect points to a hidden advantage in the AI race that financial institutions enjoy over the hyperscalers: trust.

As consumers embrace the new platforms, agents, and embedded journeys that increasingly define the AI economy, trust could be the anchor that helps financial institutions stay relevant.

While the AI hyperscalers and foundation model builders look to capture customer relationships and engagement from incumbent financial services institutions, customers still turn to banks as the safest places to hold sensitive data, make important financial decisions, and get problems fixed when things go wrong. If banks implement AI thoughtfully, that reservoir of trust could become a moat around the customer relationship.

AI speed versus reliability — the divide between tech and banks

While hyperscalers and financial institutions are increasingly finding themselves in more direct competition, they operate differently — and from a risk standpoint, that’s a good thing.

Frontier AI models are amazingly fast and fairly accurate in short bursts. In one benchmark, a leading model could read a paragraph of an annual report and reason about it with over 90% reliability, roughly on par with a human expert.

Yet when asked to reason across the entire report, accuracy fell to about 55%, closer to someone who skimmed only the executive summary. The same pattern shows up in how models handle instructions: They sometimes return something the user did not ask for, insist it matches the request, and later contradict themselves.

Confronted with this mix of power and unpredictability, different players optimize in different ways. AI developers and technology firms like OpenAI and Google ship new capabilities quickly and test them in the market. Their business models reward reach, engagement, and rapid iteration more than perfect reliability.

Banks and regulated financial institutions, in contrast, are required to be predictable and consistent. Error tolerance in core processes like payments and lending is close to zero, and new systems must be tested thoroughly before deployment. When things go wrong, banks answer not only to customers but also to regulators, auditors, and sometimes news reporters. That's why many are progressing cautiously in moving AI from internal processes — using it to drive productivity and risk improvements — to direct client-facing interactions. As banks do this, they are keeping humans in the loop to ensure appropriate controls are in place and enable predictability.
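To make that human-in-the-loop idea concrete, here is a minimal sketch, in Python, of the kind of approval gate a bank might place in front of AI-proposed, client-facing actions. The ProposedAction type, the risk tiers, and the review functions are illustrative assumptions for this example, not a description of any institution's actual controls framework.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class ProposedAction:
    """An AI-proposed, client-facing action that must clear a human gate."""
    customer_id: str
    description: str            # e.g. "waive a $35 overdraft fee"
    risk_tier: str              # "low", "medium", "high" -- set by bank policy
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer: str | None = None
    decided_at: datetime | None = None


def requires_human_review(action: ProposedAction) -> bool:
    # Illustrative policy: anything above low risk is routed to a person.
    return action.risk_tier != "low"


def record_review(action: ProposedAction, reviewer: str, approved: bool) -> ProposedAction:
    # Capture who decided and when, so the decision is auditable later.
    action.status = ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED
    action.reviewer = reviewer
    action.decided_at = datetime.now(timezone.utc)
    return action
```

The point of the gate is less the code than the audit trail: every AI-initiated action carries a reviewer and a timestamp before it reaches a customer.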

Why trust gives banks an edge over hyperscalers in AI

Banks will probably never beat the hyperscalers and leading tech-driven e-commerce players on speed or flashy front ends. But from a customer lens, there are three key elements of trust they can leverage to win where it matters most.

Customer lens: “I trust you to safeguard and activate my information”

Our research finds that customers view banks as the most trusted institutions to safeguard personal data, ahead of healthcare, government, tech platforms, AI developers, and social media. What’s more, 85% of customers say they would be willing to share even more data with their bank if the AI value proposition were clear.

Exhibit 1: Banks remain the most trusted custodians of data
Source: 2025 Oliver Wyman Consumer Survey on AI, Oliver Wyman analysis 

Banks are used to managing consent, keeping records, and following strict rules on how data is stored and used. This gives them permission to build a more complete picture of a customer’s financial life than most other players can assemble from the outside, and to offer AI services on top of it, such as cash flow monitoring and alerts, early warning signals of financial distress, and long-term planning that links spending, debt, savings, and tax. Trusted data custody is what allows financial institutions to plug AI into more consequential parts of the balance sheet earlier than others.
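As an illustration of the first of those services, the sketch below shows cash flow alerting at its simplest: project a balance forward through known upcoming transactions and flag any point where it dips below a threshold. The Transaction type and the low_balance_alerts function are hypothetical names used only for this example.

```python
from dataclasses import dataclass


@dataclass
class Transaction:
    day: int        # days from today
    amount: float   # positive = inflow, negative = outflow


def low_balance_alerts(balance: float, upcoming: list[Transaction],
                       threshold: float = 0.0) -> list[str]:
    """Return warnings for points where the projected balance dips below the threshold."""
    alerts = []
    projected = balance
    for tx in sorted(upcoming, key=lambda t: t.day):
        projected += tx.amount
        if projected < threshold:
            alerts.append(
                f"Projected balance {projected:.2f} falls below "
                f"{threshold:.2f} in {tx.day} day(s)."
            )
    return alerts


# Example: rent and a card payment hit before salary arrives on day 5.
print(low_balance_alerts(
    balance=400.0,
    upcoming=[Transaction(2, -350.0), Transaction(3, -120.0), Transaction(5, 2000.0)],
))
```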

Customer lens: “I trust you to be accurate and accountable”

Banks already operate in a system in which accountability is non-negotiable, both to regulators and to customers. A broad ecosystem of controls and governance routines helps banks serve customers reliably, providing appropriate recommendations informed by the data customers entrust to them. When things go wrong, accountability is clear.

This reliability matches what customers want from AI: comfort that a bank’s AI agents accurately and reliably interpret and execute requests. More than 93% of those we surveyed believe outcomes from AI chatbots must be verified.

If banks design their AI services to show how a request was interpreted, explain what checks were run, make clear when human review is involved, and state who stands behind the outcome, they can reduce the fear that the system will quietly do the wrong thing or hide its errors. That makes it easier for customers, boards, and supervisors to accept AI in higher-stakes roles.
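One way to picture that design is a structured record returned alongside every AI answer, capturing how the request was interpreted, which checks were run, whether a human reviewed it, and who stands behind the outcome. The AuditableOutcome class and its fields below are assumptions made for illustration, not any bank's actual system.

```python
from dataclasses import dataclass, field


@dataclass
class AuditableOutcome:
    """Record returned alongside an AI agent's answer so customers,
    boards, and supervisors can see how the request was handled."""
    request: str                      # what the customer asked for
    interpretation: str               # how the agent understood it
    checks_run: list[str] = field(default_factory=list)
    human_reviewed: bool = False
    accountable_owner: str = "Retail Banking Operations"  # who stands behind the result


outcome = AuditableOutcome(
    request="Move 500 EUR to my savings account on the 1st of each month",
    interpretation="Set up a recurring monthly transfer of 500 EUR to savings",
    checks_run=["balance forecast", "transfer limit", "account ownership"],
    human_reviewed=False,
)
print(outcome)
```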

Customer lens: “I trust you to understand my intent and act on my behalf”

Banks have deep experience in life-changing financial decisions. They know how to help customers buy homes, restructure debt, support businesses in stress, and manage risk for institutions over long horizons. This isn’t just about following rules — it’s about understanding the customer’s true objectives and knowing what good options look like, which tradeoffs to highlight, and when a human conversation is needed. Banks have the customer data and insights needed to establish this understanding.

As large language models battle it out on raw performance, the ability to accurately infer customer intent becomes a key differentiator. Institutions that can encode their know-how into AI-enabled journeys — and still stand behind the results — will be able to move faster into serious uses of AI than those starting from scratch.

Why banks must move fast to defend their trust edge in the AI era

While banks enjoy stronger customer trust today, the landscape is changing rapidly. The challenge comes not only from AI and technology firms but also from existing financial services players that are building trust by executing transactions outside the banking system, such as trading platforms, money transfer firms, and nonbank lenders. This leaves no room for complacency. If banks fail to treat trust as a core strategic asset, the advantage will steadily erode and competitors and new entrants will close the gap.

We see a number of developments that could impact banks’ trust advantage one way or the other as the AI economy takes shape.

Exhibit 2: Factors that could erode or reinforce banks' trust advantage

Customer adoption of AI in financial services isn’t likely to follow a single, smooth line. AI developers, big tech firms, and platforms will continue rapidly building out their capabilities to try to capture the customer interface and embed themselves into more customer flows.

For high-stakes, high-trust financial activities, however, customers will go where they already believe their data is safe, someone is checking the system’s work, and the institution knows what it is doing when it really counts.

For now, that still describes established financial institutions. Trust, in that sense, isn’t a brake on AI. It is the reason banks can move more confidently into the parts of AI that touch the core of customers’ financial lives — and the anchor that keeps them central as the ecosystem around them changes.


For deeper insights into whether trust is the hidden advantage for banks — or to explore how your organization can respond — contact our Financial Services Partner, Mariya Rosberg.

This article is part of our Known Unknowns report, highlighting the debates that will shape the future of financial services in the age of AI.