Financial services have reached a remarkable AI adoption milestone. Today, 75% of firms use artificial intelligence, and a further 10% plan to implement it in the next three years. Foundation models now power 17% of all AI use cases, a sign of how deeply AI has become embedded in financial operations.
North American institutions lead this technological revolution with major investments in AI banking solutions. The benefits of artificial intelligence shine brightest in data analytics, anti-money laundering efforts, fraud prevention, and cybersecurity. The future looks promising as experts predict a 21% increase in average benefits over the next three years, while risks should only rise by 9%. Cybersecurity stands out as the main systemic concern for AI in fintech, both now and in the future.
AI adoption in UK financial services
The UK's financial sector leads the global AI revolution. AI adoption has reached unprecedented levels across financial subsectors, and traditional financial operations are changing rapidly as new opportunities for efficiency emerge.
Current adoption rates across sectors
The UK's financial services firms have embraced artificial intelligence at remarkable speed. Recent data from the Bank of England and Financial Conduct Authority shows that 75% of firms are already using AI, and another 10% plan to implement it over the next three years. This widespread acceptance points to a fundamental change in how financial institutions work and serve their customers.
The insurance industry tops the chart with an impressive 95% adoption rate. International banks follow closely at 94%. Financial market infrastructure firms have the lowest rate at 57% across all financial subsectors. These numbers show how unevenly AI implementation spreads across financial services. The pattern often matches each subsector's operational complexity and customer service models.
The UK government recognizes financial services firms as "leaders in AI adoption". Sarah Breeden, the Deputy Governor of the Bank of England, confirmed these high adoption figures in October 2024.
Growth since 2022 and future projections
AI momentum in financial services keeps building. Only 58% of firms used AI in 2022, with 14% planning to adopt it. Today's numbers show how much more confident financial institutions have become in AI's abilities.
The financial future of AI looks promising. The overall AI market in finance is projected to grow from £28.93 billion in 2024 to a staggering £143.56 billion by 2030. This represents a compound annual growth rate of 30.6%. Generative AI, which creates content, automates tasks, and synthesizes data, should grow even faster. Experts expect it to jump from £1.67 billion to £12.10 billion in the same period, with a CAGR of 39.1%.
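Those growth rates follow directly from the start and end values. A minimal Python sketch, using only the figures quoted above, reproduces the compound annual growth rate (CAGR) over the six years from 2024 to 2030:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

# Overall AI-in-finance market: £28.93bn (2024) -> £143.56bn (2030)
print(f"Overall market CAGR: {cagr(28.93, 143.56, 6):.1%}")  # ~30.6%

# Generative AI in finance: £1.67bn (2024) -> £12.10bn (2030)
print(f"Generative AI CAGR: {cagr(1.67, 12.10, 6):.1%}")     # ~39.1%
```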
The Global City's analysis suggests AI could add about £35 billion to the financial and professional services sector over the next five years. AI productivity gains should reach 12% by 2025 and climb to 50% by 2030.
The Lloyds Bank Financial Institutions Sentiment Survey backs this upward trend. About 59% of institutions report better productivity through AI implementation, up from 32% last year. As a result, 51% of institutions plan to invest more in AI next year.
Sector-wise breakdown of AI usage
AI applications vary greatly across financial sectors in both scale and purpose. Large UK and international banks use AI the most, with medians of 39 and 49 cases respectively. These numbers are much higher than the industry median of 9 use cases. Most firms (56%) currently run 10 or fewer use cases, but this should change dramatically within three years.
Operations and IT departments lead AI implementation with about 22% of all reported use cases. Retail banking follows at 11%, while general insurance accounts for 10%.
Foundation models have quickly become popular, making up 17% of all AI use cases. Operations and IT dominate this specialized category too, with around 30% of all foundation model applications.
Financial leaders increasingly see AI's potential. About 91% now view the technology as a chance rather than a threat—up from 80% last year. Despite this optimism, only 29% of financial institutions report meaningful cost savings from AI. This suggests many initiatives still need to improve their operational efficiency.
Key business areas using AI
AI has moved beyond theory and become a core part of business functions in the financial world. Financial institutions now use AI in many areas to create measurable value and solve long-standing industry problems.
Operations and IT
Operations and IT make up the biggest share of AI applications, with about 22% of all reported use cases, roughly double the share of any other business area and a clear sign of how firms prioritize internal efficiency. The figures bear this out: 41% of financial institutions use AI to optimize internal processes, the highest rate among all business functions.
Companies focus on operational improvements because AI makes time-consuming tasks more efficient. Financial institutions use advanced analytics to spot workflow bottlenecks and suggest automation opportunities that cut down manual work. These tools study transaction patterns and flag cases needing human attention while processing routine tasks automatically. On top of that, process-mining tools map every step in a financial workflow so redundant ones can be eliminated.
The outlook is equally positive: a further 31% of firms plan to use AI for operational optimization in the next three years, continuing the move toward intelligent automation in financial operations.
Retail banking and insurance
Retail banking ranks second in AI implementation with 11% of all use cases, and general insurance follows at 10%. AI revolutionizes these customer-facing areas through personalized service and better risk assessment.
Insurance companies' machine learning models offer custom premiums, evaluate risk profiles, and handle claims faster. On top of that, AI's growing role in environmental, social, and governance (ESG) investing marks a key trend, as financial institutions analyze sustainability factors for investment choices.
AI algorithms have transformed risk management from a manual, time-intensive exercise into a largely automated one. These tools assess credit risk, predict loan defaults, and track market volatility, giving institutions predictive insights that help them prepare for market changes.
Fraud detection and cybersecurity
Cybersecurity and fraud detection stand out as AI's most established uses in finance - 37% and 33% of firms use these technologies respectively. Usage will keep growing: a further 31% of firms plan to add AI-powered fraud detection within three years.
Banks use machine learning models to spot suspicious transaction patterns, with major banks deploying these systems to find "dark patterns" that might signal fraud. The results are impressive: American Express's fraud detection improved by 6% with advanced AI models, while PayPal's real-time fraud detection improved by 10%.
HSBC shows AI's power by checking about 900 million transactions monthly across 40 million customer accounts. Their system achieved remarkable results:
- Detected 2-4 times more financial crime than before
- Cut false positives by 60%
- Reduced processing time from weeks to days
AI systems protect against cyber threats by finding network anomalies, spotting new fraud patterns, and fixing vulnerabilities. These tools analyze huge datasets to find hidden patterns and speed up response times.
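As an illustration of the anomaly detection described above, here is a minimal sketch of transaction screening with scikit-learn's IsolationForest. The feature set, data, and contamination rate are hypothetical placeholders, not details of any named bank's system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount, hour of day, merchant risk score
rng = np.random.default_rng(0)
normal_txns = np.column_stack([
    rng.lognormal(3, 1, 5000),     # typical purchase amounts
    rng.integers(8, 22, 5000),     # mostly daytime activity
    rng.uniform(0.0, 0.3, 5000),   # low-risk merchants
])

# Train on historical "normal" behavior; ~1% of traffic assumed anomalous
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_txns)

# Score new transactions: -1 flags an anomaly to route for human review
new_txns = np.array([[25.0, 14, 0.1],      # ordinary afternoon purchase
                     [9500.0, 3, 0.9]])    # large amount, 3am, risky merchant
print(model.predict(new_txns))             # e.g. [ 1 -1 ]
```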
Customer service and chatbots
Customer service has become AI's next frontier. The next three years will see 36% of financial institutions using AI for customer support - making it the fastest-growing area.
AI chatbots give instant, round-the-clock answers to customer questions while creating personal connections. A major international financial company's AI chatbot handles 500,000 customer conversations yearly, solving over half without human help. This saves about €2 million yearly, with only 6% of chats needing human attention.
NatWest Bank's AI assistant "Cora" proves this success. Customer satisfaction jumped 150% after launch, while the need for human support dropped. These clear improvements explain why financial services keep expanding their customer-facing AI tools.
Types of AI models in use
Financial institutions increasingly rely on AI technologies to power their digital transformation, and the models they choose define what is and is not possible. Clear priorities have emerged across the sector.
Gradient boosting and decision trees
AI in financial services runs largely on gradient boosting models. These models make up 32% of all reported use cases. Their popularity comes from excellent performance with structured financial data and tabular datasets.
Gradient Boosting Machines (GBMs) are ensemble learning algorithms that combine multiple decision trees sequentially to produce more accurate predictions. Training starts with a simple decision tree that makes initial predictions; the algorithm then examines that first model's errors and trains new trees to correct them. Each round improves accuracy by concentrating on the mistakes that remain.
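A minimal sketch of that residual-fitting loop, built from plain scikit-learn decision trees on synthetic data to show the mechanism rather than any production model:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 500)   # toy target

learning_rate, trees = 0.1, []
prediction = np.full_like(y, y.mean())            # constant "first guess"

for _ in range(100):
    residuals = y - prediction                    # errors of the current ensemble
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    trees.append(tree)
    prediction += learning_rate * tree.predict(X) # each new tree corrects remaining mistakes

print("final training error:", np.mean((y - prediction) ** 2))
```

Production systems use libraries such as XGBoost, LightGBM, or scikit-learn's GradientBoostingClassifier, which implement the same idea with regularization and far better performance.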
Decision trees shine in financial applications because people can understand how they work, whether they stand alone or sit inside a gradient boosting ensemble. These models follow clear decision paths based on specific financial variables, which makes them valuable when regulations require transparency.
Gradient-boosted regression trees also work well in quantitative finance for learning pricing maps for financial derivatives. Once trained, the models predict prices quickly while remaining sufficiently accurate, making complex financial calculations several orders of magnitude faster.
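A hedged sketch of that surrogate-pricing idea: fit a gradient-boosted regressor to prices produced by a slow reference model, then use the fitted trees for fast approximate pricing. Here a closed-form Black-Scholes call price stands in for the expensive pricer; everything is illustrative rather than a production pricing library:

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingRegressor

def bs_call(S, K, T, r, sigma):
    """Reference pricer (stand-in for a slow model such as Monte Carlo)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Sample the input space (spot, maturity, volatility) and label with the slow pricer
rng = np.random.default_rng(1)
n = 20_000
X = np.column_stack([rng.uniform(50, 150, n),    # spot price
                     rng.uniform(0.1, 2.0, n),   # time to maturity (years)
                     rng.uniform(0.1, 0.5, n)])  # volatility
y = bs_call(X[:, 0], 100.0, X[:, 1], 0.02, X[:, 2])

# The trained trees now return approximate prices far faster than re-running
# the reference model for every query
surrogate = GradientBoostingRegressor(n_estimators=300, max_depth=4).fit(X, y)
print(surrogate.predict([[105.0, 1.0, 0.25]]))
```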
These models examine historical patterns to predict potential losses and spot rising risk indicators. Banks also use them extensively in credit scoring, predicting the likelihood of default from past borrower behavior.
Transformer-based and foundation models
Transformer-based models have grown rapidly in popularity and now make up 10% of all AI use cases in financial services. These models originated in natural language processing but now serve many purposes in banking and finance.
The transformer architecture, introduced in 2017, fundamentally changed language modeling and laid the groundwork for the Large Language Models (LLMs) that give financial institutions new ways to analyze data. Transformer models come in three types: encoder, decoder, and sequence-to-sequence, each serving different financial needs.
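A minimal sketch of the encoder-style use case, scoring the sentiment of financial text with the Hugging Face transformers library. The FinBERT checkpoint named here is one publicly available finance-tuned model and is an illustrative choice, not one drawn from the surveys cited above:

```python
from transformers import pipeline

# Encoder model fine-tuned for financial sentiment (illustrative checkpoint)
classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Bank reports record quarterly profits and raises dividend",
    "Regulator fines lender over anti-money-laundering failings",
]
for headline in headlines:
    print(headline, "->", classifier(headline)[0])  # e.g. {'label': 'positive', 'score': ...}
```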
Foundation models trained on big datasets with little human input now cover 17% of all AI use cases in the sector. These models provide the knowledge base that powers generative AI applications. Operations and IT teams lead the way in using foundation models, making up about 30% of all such use cases. Legal teams show interesting numbers too - foundation models make up 29% of all AI models they use.
Popular LLMs such as Llama, Gemini, and GPT are decoder models trained on internet-scale text. Powerful as they are, they can make mistakes on specialized financial topics, so financial institutions often use Retrieval Augmented Generation (RAG) to ground model responses in verified facts.
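A minimal sketch of the RAG pattern: retrieve the most relevant passages from a verified document store and prepend them to the prompt before calling the LLM. TF-IDF retrieval is used here for brevity where production systems typically use embedding-based vector search, the snippets are invented examples, and `generate` is a placeholder for whatever LLM client a firm actually uses:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Verified knowledge base (illustrative snippets, not real policy text)
documents = [
    "Standard variable rate mortgages track the bank's base rate plus a margin.",
    "ISA contributions are capped per tax year; unused allowance does not roll over.",
    "Chargebacks on debit card payments must be raised within 120 days.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)  # placeholder for the firm's LLM client call

print(retrieve("How long do I have to dispute a card payment?"))
```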
Credit risk assessment has also started using transformer-based models. Recent studies show these models can predict credit card defaults better than traditional gradient boosting in some cases by finding hidden patterns in time-based data.
Third-party and proprietary models
Banks and financial firms increasingly use external AI providers rather than building everything in-house. The data shows that the top three third-party model providers make up 44% of all named providers, up sharply from 18% in 2022.
This trend shows up in cloud services too, where the top three providers make up 73% of all named providers, though this number has dropped from previous surveys. The top three data providers now represent 33% of all named providers, up from 25% in 2022.
Some financial institutions build their own specialized models to handle regulation. The Prudential Regulation Embeddings Transformer (PRET) illustrates the effort to improve information retrieval from financial regulations: unlike general financial models, it is trained specifically on regulatory texts so it can understand compliance requirements better.
As AI adoption in finance accelerates, institutions must weigh the convenience of third-party solutions against the benefits of building their own tools. Either way, they need strong governance and risk management.
Automated decision-making in financial services
AI is reshaping how financial services make decisions. Systems now range from simple automation to sophisticated autonomous agents. Financial institutions must balance increased efficiency with accountability and regulatory needs.
Semi-autonomous vs fully autonomous systems
Today's ecosystem shows that 55% of all AI use cases have some automated decision-making capability, though autonomy levels differ: 24% of these systems are semi-autonomous, making independent decisions but requiring human oversight for complex or ambiguous situations.
Very few systems operate fully autonomously - only 2% of all AI use cases in financial services. These systems work without human input and change traditional approval processes. Decision automation combines AI, data analysis, and business rules to boost efficiency, cut errors, and deliver consistent results.
The sector is nonetheless moving toward more advanced autonomous systems. Agentic AI leads this shift: unlike generative AI, which responds to prompts, these systems learn and act with minimal human input, and can orchestrate multiple agents that use large language models as a shared brain to solve complex problems on their own.
Human oversight in critical decisions
Financial services involve high stakes, so humans must oversee complex decisions. Semi-autonomous systems dominate the market precisely because they keep humans involved in critical choices, and regulators regard this human-in-the-loop model as essential before AI can be allowed to make autonomous decisions.
Companies implement oversight differently. Human experts check AI-generated product recommendations to ensure they follow regulations. Similarly, human staff handle complex issues when chatbots can't resolve customer questions.
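A minimal sketch of how that escalation logic is often wired up: the model's confidence and the decision's materiality determine whether a case is executed automatically or routed to a human reviewer. The thresholds and field names are illustrative assumptions, not any firm's actual policy:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    confidence: float    # model's probability for its recommended outcome
    amount: float        # monetary materiality of the decision

CONFIDENCE_FLOOR = 0.90   # below this, a human must review (illustrative)
MATERIALITY_CAP = 25_000  # above this amount, always escalate (illustrative)

def route(decision: Decision) -> str:
    """Semi-autonomous routing: act automatically only on clear, low-stakes cases."""
    if decision.amount > MATERIALITY_CAP or decision.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"
    return "auto_execute"

print(route(Decision(approved=True, confidence=0.97, amount=1_200)))  # auto_execute
print(route(Decision(approved=True, confidence=0.80, amount=1_200)))  # escalate_to_human
```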
This oversight serves many purposes: it catches errors, spots bias, follows ethical guidelines, and makes decisions transparent for regulators. Without proper monitoring, AI tools might create new problems instead of solving existing ones.
Humans play a key role in understanding complex situations, making judgment calls, ensuring accuracy, and taking responsibility. AI results, especially from generative AI, sometimes produce incorrect information that needs human verification.
The Bank of England expects banks and insurers to use more AI in core business decisions about credit and insurance. This could lead to better products and more accurate risk management. However, advanced AI models might take financial risks that nobody understands if deployed without proper testing and controls.
Industry experts agree on a hybrid approach: AI handles analysis while humans provide judgment, oversight, and personal interaction. One expert puts it well: "Finance professionals today are not just adopting AI—they're being called upon to ensure its ethical and effective use".
Third-party AI implementations and dependencies
Financial services' dependency landscape has changed dramatically. Financial institutions now rely heavily on external providers for their AI capabilities. This interdependence creates opportunities and vulnerabilities that reshape how the sector develops and employs AI solutions.
Extent of third-party usage
External AI implementations have gained substantial traction in the financial ecosystem. A third of all AI use cases are now third-party implementations. This number shows a dramatic rise from 17% in 2022. The surge reflects AI models' growing complexity and lower outsourcing costs.
Some business areas show particularly high external dependency rates. Human resources tops the list with 65% of AI implementations coming from third parties. Risk and compliance follows closely at 64%. Operations and IT remains a hub for in-house AI development, yet sources 56% of its AI solutions externally.
The trend creates a complex web of interdependencies throughout the financial ecosystem. Financial institutions' expanding AI footprints intensify their reliance on specialized external providers. This pattern shows up most clearly in areas that need specialized expertise or substantial computational resources.
Top providers for cloud, models, and data
Market power concentrates among a handful of providers in the current landscape. The top three providers account for 73% of all named providers in cloud services. This reflects major tech companies' dominance in providing AI financial services infrastructure.
The numbers appear even more striking for model providers. The top three model providers represent 44% of all reported providers—a dramatic jump from 18% in 2022. Financial institutions seem to converge around specific AI frameworks and technologies. The top three data providers constitute 33% of all named providers, up from 25% in 2022.
Specific implementations highlight this dependence clearly. Starling Bank built its Spending Intelligence tool using "Google's Gemini models, running on the secure and scalable Google Cloud platform". Lloyds Banking Group teamed up with Google Cloud to speed up their AI innovation.
Aveni launched FinLLM—a large language model specialized for UK financial services. Major institutions including Lloyds Banking Group and Nationwide supported its development.
Risks of third-party reliance
External AI providers' growing influence introduces considerable vulnerabilities to the financial sector. Third-party dependency ranks among risks expected to grow substantially over the next three years. Concerns about model complexity and embedded "hidden" models add to these worries.
This concentration creates potential single points of failure from a systemic view. Customer-facing functions' heavy reliance on vendor-provided AI models raises concerns. "A widespread outage of one or several key models could leave many firms unable to deliver vital services such as time-critical payments".
The Financial Stability Board identified "third-party dependencies and service provider concentration" as a main AI-related vulnerability that could increase systemic risk. CrowdStrike's July 2024 worldwide IT outage proved this concern valid. The incident showed how operational disruptions at key service providers can ripple throughout the financial system.
Regulatory frameworks evolve to tackle these risks. The Financial Services and Markets Act 2023 created a new regulatory regime for critical third parties. Public-private collaboration helps develop a "shared responsibility model" for AI implementations. Data and AI model providers might soon need specific regulatory oversight as critical third parties.
Governance, accountability and explainability
Strong governance structures support responsible AI use in financial services. These structures ensure regulatory compliance and ethical implementation. Financial institutions now see governance as more than just compliance—it's a strategic necessity for safe AI adoption.
AI-specific governance frameworks
Financial institutions take various approaches to AI governance. 84% of firms currently using AI have designated people accountable for their AI framework. 82% of firms have created AI frameworks, principles, or guidelines. Data governance also plays a key role, with 79% of organizations implementing it.
The UK government uses a cross-sector, outcome-based framework to regulate AI. This framework rests on five core principles: safety, security, appropriate transparency, fairness, and accountability. Instead of creating a new AI regulator, existing bodies like the Information Commissioner's Office (ICO), Ofcom, and the FCA apply these principles in their areas.
The Financial Conduct Authority requires firms to have "robust governance arrangements, which include a clear organizational structure with well defined, transparent and consistent lines of responsibility". Good AI governance improves efficiency and encourages innovation while managing associated risks.
Executive accountability and oversight
Most institutions place AI governance responsibility with their executive leadership. 72% of firms make their executives accountable for AI use cases and outputs. Developers and data science teams handle responsibility in 64% of cases, while business area users manage 57%.
The Senior Managers and Certification Regime (SM&CR) establishes clear accountability for AI usage. Technology systems usually fall under the Chief Operations function (SMF24) in PRA-authorized and FCA-authorized Enhanced SM&CR firms. The Chief Risk function (SMF4) manages overall risk controls. A firm's board must review and approve an assessment of customer outcomes at least yearly.
Explainability methods like SHAP and feature importance
81% of firms using AI today use some form of explainability method. Feature importance leads as the most common technique (72% of firms), with Shapley additive explanations following closely (64% of firms). These methods show how input variables shape machine learning model predictions.
SHAP (SHapley Additive exPlanations) has gained popularity for explaining AI decisions. This game theory-based approach shows how different variables contribute to predictions. Companies can tell customers exactly why decisions were made. A typical explanation might be: "Your premium was influenced primarily by your driving history (40% impact), vehicle type (30% impact) and location (20% impact)".
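A minimal sketch of producing both kinds of explanation for a tree-based credit model, combining scikit-learn's built-in feature importances with the shap library. The features, data, and default rule are synthetic placeholders:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
features = ["income", "debt_ratio", "missed_payments", "account_age"]
X = np.column_stack([rng.normal(40_000, 12_000, 1000),
                     rng.uniform(0, 1, 1000),
                     rng.poisson(0.5, 1000),
                     rng.uniform(0, 20, 1000)])
y = (X[:, 1] > 0.6) | (X[:, 2] > 2)   # synthetic default flag

model = GradientBoostingClassifier().fit(X, y)

# Global view: which features drive the model overall (feature importance)
print(dict(zip(features, model.feature_importances_.round(3))))

# Local view: why one specific applicant was scored the way they were (SHAP)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(dict(zip(features, np.round(shap_values[0], 3))))
```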
These methods have their limits. SHAP lacks portfolio invariance, which means prediction explanations depend on portfolio distribution. Customer explanations might change over time even when their characteristics stay the same. Financial institutions must keep improving their explainability approaches to build trust, meet regulations, and maintain ethical AI practices.
Benefits and risks of AI in 2025
The financial landscape continues to evolve rapidly as 2025 draws to a close. Financial institutions must carefully balance the benefits of AI adoption against new risks, and the sector is navigating this balance with growing expertise.
Top benefits: analytics, AML, cybersecurity
Banks and financial institutions see their biggest advantages in three areas: data analytics, anti-money laundering (AML) efforts, and better cybersecurity capabilities. AI-powered fraud detection systems could help banks save £7.94 billion each year. These systems analyze transactions and flag suspicious activities in real-time.
AI boosts cybersecurity operations through rapid pattern recognition and anomaly detection, allowing institutions to counter threats at machine speed. These systems detect subtle network irregularities, spot evolving fraud patterns, identify deepfakes used to evade KYC checks, and highlight AI-generated phishing scams.
Top risks: data privacy, model complexity, third-party risk
Data-related issues make up four of the five biggest current risks: privacy and protection, quality problems, security concerns, and dataset bias. The next three years will likely see increased risks from third-party dependencies, model complexity, and embedded or "hidden" models.
Complex AI models create new challenges for explanation and interpretation. This becomes crucial in regulated environments where transparency is required. Organizations also struggle with regulatory compliance. Regulators plan tougher enforcement through higher fines as they focus on AI-related data practices.
Systemic risks and future concerns
Cybersecurity remains the biggest systemic risk now and for the next three years. Critical third-party dependencies show the highest expected increase in systemic risk.
Financial institutions face a two-sided contest: while they use AI for defense, criminals exploit the same technologies for sophisticated attacks. A major cyber incident at a widely used AI provider could affect many financial institutions at once, with severity depending on how critical the affected AI services are.
Regulatory and non-regulatory constraints
Financial institutions still face major barriers as they deploy AI solutions, even as implementation accelerates. These practical constraints, as much as theoretical possibilities, shape how AI works in financial services.
Data protection and privacy rules
Data protection and privacy regulations create the most significant regulatory challenge. Research shows 23% of firms see them as a large constraint and 29% view them as a medium constraint. The Data Protection Act 2018 and UK GDPR create specific challenges: companies must consider how their AI systems handle personal data while following fairness principles. The biggest problem remains high regulatory burden, with 33% of firms highlighting this issue for data protection requirements.
Talent shortages and explainability challenges
The lack of skilled people creates a critical bottleneck beyond regulatory frameworks. Statistics show 25% of firms call it a large constraint and 32% see it as a medium barrier. Making AI systems transparent and explainable adds another major hurdle (16% large constraint, 38% medium). The talent gap becomes more worrying since only 9% of UK financial services executives believe their companies are ready for upcoming AI regulations.
Lack of regulatory clarity
Companies also struggle with unclear existing regulations. About 18% of organizations point to unclear rules around intellectual property rights, and 13% mention confusion about the FCA's Consumer Duty. The FCA understands that companies need clear rules but also knows that rushing could inadvertently stifle innovation.
Conclusion
The UK financial services sector has reached a turning point in AI adoption. Insurance and international banking lead the charge, and adoption across the sector as a whole stands at an impressive 75%. This technology's influence on traditional financial operations continues to grow, and the market value is expected to surge from £28.93 billion to £143.56 billion by 2030, proving AI has evolved well past its experimental phase.
The benefits currently overshadow the risks, yet financial institutions must carefully balance their AI enthusiasm with sound governance. Data analytics, anti-money laundering efforts, and cybersecurity show the greatest potential advantages. Data privacy issues, complex models, and dependence on third parties pose major challenges that need immediate attention.
Gradient boosting models currently rule the AI space and handle 32% of use cases. Foundation models have started to gain ground, especially when dealing with operations, IT, and legal work. This move toward advanced AI technologies needs strong governance frameworks. Most firms recognize this need and have set up specific accountability structures.
Third-party providers offer both possibilities and risks. Top providers' dominance in cloud services (73%), models (44%), and data (33%) raises concerns about systemic risk. Recent widespread IT outages showed why financial institutions must create backup plans for possible disruptions.
Data protection and privacy rules continue to guide how AI gets implemented. Finding skilled talent and explaining AI decisions make adoption harder. Yet the financial services sector remains focused on growing AI capabilities while developing proper risk controls.
AI will reshape UK financial services through 2025 and beyond. Financial institutions that can handle regulatory requirements, find talented staff, and create solid governance will lead the industry. The question is no longer about adopting AI but implementing it responsibly and effectively.