Brief
At a Glance
- The velocity of change in AI has made continuous transformation the norm.
- Leading firms combine hands-on engagement by the CEO and senior executive team with bold operating model redesign, strong data foundations, and disciplined experimentation.
- Scaled impact comes from pairing enterprise-wide experimentation with a clear strategic thesis and focused investment.
- The primary constraints are failure to redesign roles and the organizational structure, failure to scale AI governance, and data that doesn’t provide the right context for AI.
This article summarizes insights from Bain & Company’s Global AI in Financial Services Summit, held in April 2026 in San Francisco.
The release of new frontier large language models in late 2025 and early 2026 did not simply improve AI performance incrementally; it shifted the paradigm. Agents moved from concept to operational reality. Coding tools evolved from productivity aids into what some AI leaders now describe as accelerators that let engineers develop software up to 10 times faster.
The velocity is disorienting. Senior executives who had expected to defer major AI decisions for another year are realizing that their window for action may be measured in months. The key question now is which institutions will reshape themselves fast enough to lead rather than follow.
The power of top-down engagement
AI transformation requires intense, visible involvement from the CEO and senior executive team. At leading financial institutions, CEOs review top AI initiatives on a biweekly basis, dive into engineering details, personally unblock impediments, and hold dedicated AI sessions multiple times a week. The leaders who use AI tools themselves gain enormous credibility and pull their organizations forward, while those who delegate AI experimentation also delegate their authority to spur change.
But top-down energy alone is not enough. CEOs must provoke ambition while simultaneously removing anxiety about missing targets, giving teams permission to aim high without fear of failure. Without that combination, organizations fall into a self-reinforcing cycle of playing it safe.
Modernizing the entire organization
The hard part of AI transformation is scaling localized actions in functions such as customer service, claims, and software development into an end-to-end redesign of those processes or functions. To move from isolated wins to institutional change, many leading organizations follow a specific logic: Leadership ambition drives the business case and value commitment, which informs workflow transformation, which surfaces organizational implications, which finally determines technology changes. This sequence runs counter to the instincts of organizations that expect the technology itself to spur the change.
A recurring tension exists between the need to start small and the imperative to think at enterprise scale. The organizations that are furthest along start with domain-specific use cases, prove value, and then extract reusable building blocks into a platform. Critically, the initial use cases are selected against a strategic thesis about where the institution’s highest-value AI opportunities lie, typically at the intersection of high-volume workflows, rich proprietary data, and clear competitive differentiation. They let a thousand flowers bloom in how people learn and experiment but ruthlessly select which experiments graduate into scaled transformation programs.
Challenges with the human dimension
Three areas stand out as tough organizational challenges: middle management, motivation, and skills.
Middle managers often have the most to lose from AI transformation, because their coordination role becomes less necessary. The answer is not to eliminate these roles but to help middle managers evolve into hands-on builders and player-coaches. Those who use AI to enhance their own productivity demonstrate that their domain knowledge remains valuable. But the organizational structure will flatten regardless.
To motivate people, one effective approach involves triggering concern and inspiration in sequence—first showing how competitors could disrupt each function, then showing how AI could reinvent it. Motivation must be tailored, as what moves software engineers differs from what moves operations staff. Here, it’s critical to articulate why people cannot stay where they are, coupled with a compelling vision of a better future and real options for the people affected, including some combination of reskilling, redeployment, or role redesign.
Education’s role changes as well. The age of AI puts a premium on hiring for learning velocity rather than narrow functional expertise and on teaching current employees new skills. New hires benefit from intensive boot camps designed to test flexibility and adaptability. They should be positioned as problem solvers rather than slotted into traditional roles, using AI rather than avoiding it.
Mastering the use of data
Data is increasingly the key enabler of AI value but also one of its biggest bottlenecks. The organizations that get the most from AI have the most usable, trusted, well-governed data. Much of the enterprise value sits in fragmented, messy, and context-rich data: contracts, e-mails, call transcripts, support tickets, engineering notes, and the like. The reasoning behind business decisions often resides in such unstructured data. To extract it reliably requires classification, metadata, permissions, lineage, deduplication, retrieval design, and continuous quality monitoring. Without that foundation, AI systems may retrieve stale or irrelevant context, miss critical nuance, or expose sensitive information.
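As a minimal illustration of what such a foundation enforces at retrieval time, the sketch below applies permission and staleness checks before a document can reach an AI system. All fields, classifications, and thresholds are invented for illustration; a real pipeline would also handle lineage, deduplication, and continuous quality monitoring.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Document:
    """A unit of context-rich enterprise data with governance metadata (illustrative)."""
    doc_id: str
    text: str
    classification: str                      # e.g., "public", "internal", "restricted"
    allowed_roles: set = field(default_factory=set)
    last_verified: date = date(2026, 1, 1)   # when the content was last confirmed accurate

def retrieve(docs, requester_role, as_of, max_age_days=365):
    """Return only documents the requester may see and that are fresh enough."""
    results = []
    for doc in docs:
        # permission check: don't expose sensitive information to the wrong role
        if doc.classification == "restricted" and requester_role not in doc.allowed_roles:
            continue
        # staleness check: avoid feeding the model outdated context
        if (as_of - doc.last_verified).days > max_age_days:
            continue
        results.append(doc)
    return results

docs = [
    Document("d1", "Claims policy v3", "internal", set(), date(2026, 3, 1)),
    Document("d2", "M&A term sheet", "restricted", {"deal-team"}, date(2026, 2, 1)),
    Document("d3", "Pricing memo 2021", "internal", set(), date(2021, 6, 1)),
]

visible = retrieve(docs, requester_role="analyst", as_of=date(2026, 4, 1))
```

The deny-by-default shape matters more than the specific rules: without an explicit pass on classification and freshness, a document never reaches the model.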
At the same time, structured data is becoming a new pressure point. As agents become more capable, they will query databases, dashboards, warehouses, CRM systems, ERP systems, and operational platforms directly. That changes the demand profile dramatically. Query volumes can quickly dwarf what those systems were provisioned for, creating latency issues, degraded performance for core business users, and unexpectedly high costs.
Enterprises will need stronger controls around which agents can access which data, how often they can query systems, what cost guardrails apply, and when answers should come from cached summaries, semantic layers, replicas, or governed APIs instead of raw production systems. The winners will treat data architecture, governance, and cost management as core AI capabilities, not back-office plumbing.
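The kinds of controls described above can be sketched in a few lines. The class, thresholds, and routing labels below are all invented; a production gate would front real databases and semantic layers rather than return strings, but the decision order (rate limit, cost budget, access entitlement, then routing away from production) is the point.

```python
import time
from collections import defaultdict

class AgentQueryGate:
    """Illustrative guardrail for agent queries: enforces simple rate and cost
    limits per agent and routes approved queries to a replica, not production."""

    def __init__(self, max_queries_per_minute=60, cost_budget=100.0):
        self.max_qpm = max_queries_per_minute
        self.cost_budget = cost_budget          # per-agent spend ceiling (made up)
        self.spent = defaultdict(float)         # agent_id -> cost consumed
        self.recent = defaultdict(list)         # agent_id -> query timestamps

    def route(self, agent_id, allowed_sources, target, est_cost, now=None):
        now = time.time() if now is None else now
        # rate limit: keep only timestamps from the last 60 seconds
        window = [t for t in self.recent[agent_id] if now - t < 60]
        if len(window) >= self.max_qpm:
            return "rejected: rate limit"
        # cost guardrail: stop runaway query spend before it happens
        if self.spent[agent_id] + est_cost > self.cost_budget:
            return "rejected: cost budget"
        # access control: agents touch only the systems they are entitled to
        if target not in allowed_sources:
            return "rejected: no access"
        window.append(now)
        self.recent[agent_id] = window
        self.spent[agent_id] += est_cost
        # serve from a governed replica rather than the raw production system
        return f"route to replica of {target}"
```

Usage is a single call per query, for example `gate.route("a1", {"crm"}, "crm", est_cost=2.0)`, so the same gate can sit in front of warehouses, CRM systems, and operational platforms alike.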
A critical risk has also emerged around software-as-a-service vendors. The CIO should have a view on the future architecture and the role of SaaS applications—in particular, how critical systems of record will need to be updated for agentic AI. Yet some SaaS vendors are actively working to keep data siloed within their ecosystems, making it difficult for organizations to build cross-platform AI capabilities; vendor incentives often favor maintaining those silos and charging more for the privilege of using vendor-native AI agents.
Governance shifts from meetings to code
Traditional risk frameworks, built around meetings, committees, manual reviews, and document-based approvals, cannot keep pace with AI deployment at scale. The solution taking shape consists of policy as code—that is, translating policies into executable code on the AI platform. The approach to accountability also needs to evolve in a way that makes frontline business and technology leaders accountable. They can be supported by innovations such as full-stack compliance developers—individuals with delegated authority across multiple compliance domains (privacy, risk, regulatory) who sit within AI teams and can approve go-to-market activities using AI-powered compliance tools.
Several practical governance patterns are proving effective. Risk-differentiated approval paths classify use cases by risk level, with fast-track preapproval for low-risk applications and deeper review for high-risk ones. Embedding risk specialists in agile teams from day one, rather than engaging risk partners at the end, keeps velocity high. By harvesting small use cases with big potential, letting low-risk experiments move fast, and then identifying the ones with broader potential and investing in scaling them, organizations can avoid the bottleneck of trying to govern everything through a single gate. In addition, as business and technology leaders gain a better understanding of the risks, it becomes feasible to reduce governance steps or change the risk appetite.
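Policy as code and risk-differentiated approval paths can be made concrete with a small sketch. The risk factors, scores, and tiers below are invented for illustration; the point is that the policy becomes an executable function evaluated at deployment time rather than an agenda item for a committee.

```python
# Illustrative policy-as-code sketch: classify an AI use case by risk level
# and return its approval path. Factors, weights, and tiers are made up.

def risk_tier(use_case: dict) -> str:
    """Score a use case on a few risk factors and map it to a tier."""
    score = 0
    if use_case.get("customer_facing"):
        score += 2
    if use_case.get("uses_personal_data"):
        score += 2
    if use_case.get("autonomous_actions"):
        score += 3
    return "high" if score >= 4 else "medium" if score >= 2 else "low"

APPROVAL_PATHS = {
    "low": "fast-track: preapproved, post-hoc audit",
    "medium": "embedded risk specialist sign-off within the agile team",
    "high": "full model-risk and compliance review before launch",
}

def approval_path(use_case: dict) -> str:
    """Executable policy: every use case gets a path, and only high-risk
    ones are routed through the slow, deep-review gate."""
    return APPROVAL_PATHS[risk_tier(use_case)]
```

Because the policy is code, changing the risk appetite later (as business and technology leaders learn) is a reviewed code change rather than a renegotiation of a governance process.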
Agentic AI’s new wrinkle
The shift from generative AI to agentic AI—agents that take autonomous actions, not just generate content—creates challenges that current frameworks simply do not address.
Some financial institutions select patterns for their agents that are analogous to those of human employees, assigning them functional IDs, access controls, and management structures. Others adopt zero-trust per-transaction authentication for customer-facing agents. The deeper question remains unresolved: As agents become more capable and multiagent systems create chains of delegation, where does human accountability begin and end? The consensus is that it must always roll up to a human, but the frameworks for how to accomplish that are still being developed.
The context that agents operate within is critical. Financial institutions must develop capabilities to ensure that agents have access to good contextual data, short-term memory (within a task), and long-term memory (to improve task performance over time). Managing these different types of memory is a new governance challenge with no established playbook.
Treating agents as digital employees helps organizations scale from small pilots to enterprise deployment by reusing frameworks they already have, covering identity, access rights, ownership, supervision, escalation, monitoring, and auditability. What role does this agent perform? What data and systems can it access? Who is accountable for its actions? Who reviews exceptions? Agents become easier to understand as part of a hybrid human–digital workforce rather than opaque technology artifacts.
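Reusing employee-style frameworks for agents might look like the following sketch, where each field answers one of the questions above. All identifiers, roles, and system names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    """An agent registered like a digital employee (illustrative fields only)."""
    agent_id: str              # functional ID, analogous to an employee ID
    role: str                  # what role does this agent perform?
    data_access: tuple         # what data and systems can it access?
    accountable_owner: str     # which human is accountable for its actions?
    exception_reviewer: str    # who reviews escalations and exceptions?
    supervised: bool = True    # is a human in the loop for high-impact actions?

def can_access(record: AgentRecord, system: str) -> bool:
    """Deny-by-default access check against the agent's registered entitlements."""
    return system in record.data_access

# Hypothetical registration of a claims-triage agent
claims_bot = AgentRecord(
    agent_id="AGT-0042",
    role="claims triage",
    data_access=("claims_db", "policy_docs"),
    accountable_owner="head_of_claims",
    exception_reviewer="claims_risk_lead",
)
```

The record is deliberately the same shape as an HR or identity-and-access entry, which is what lets existing supervision, escalation, and audit processes absorb agents without a new framework.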
Turning to regulation, certain agencies in the Asia-Pacific region have been among the most proactive, cocreating model risk management guidance for agentic AI with industry participants. The requirements of the EU AI Act are phasing in over several years, and its definition of an AI system covers agentic systems. In the US, the most recent supervisory guidance from federal banking agencies explicitly placed agentic systems out of scope. So institutions in good standing with their regulators have a window of opportunity to help shape the rules rather than wait to react to them.
The competitive clock is ticking
Experimentation with AI is nonnegotiable—and it cannot be delegated. Every leader, from the CEO to the front line, must build personal fluency through direct use. When leaders engage visibly, they create credibility and unlock adoption across the organization. The impact compounds quickly: Small, individual breakthroughs scale into enterprise-wide transformation. But this only happens when organizations provide the tools, permission, and support to experiment broadly. Where experimentation is constrained or centralized, progress stalls and the gap relative to institutions that are further ahead widens.
The urgency behind this is not theoretical. Leading investors predict that companies with widespread AI adoption will outperform the median company. A growing share of consumers now start their search and purchase journeys on AI platforms, with significantly higher conversion rates than traditional channels. Product-market fit gaps that used to take years to close are collapsing in months. The talent and infrastructure constraints that once governed the pace of adoption are being dismantled faster than most institutions have budgeted for.
At the same time, experimentation without direction or controls leads to fragmentation, fatigue, and risk. The institutions pulling ahead pair widespread AI adoption with a clear strategic thesis—focusing on a defined set of value pools and creating mechanisms to scale what works while stopping what doesn’t. AI is already reshaping competition, compressing timelines, and accelerating advantage for early movers. The greatest risk is not overinvesting but rather inaction or unfocused effort. Leaders who move with clarity and intent will build lasting advantage; those who wait will struggle to catch up.