Regulators are exposing the UK public and the City of London to “potentially serious harm” with their “wait-and-see” approach to the mounting risks of AI in financial services, an influential group of MPs warned on Tuesday.
The House of Commons Treasury committee urged the Treasury, the Bank of England and the Financial Conduct Authority to be more “proactive” about the rising use of AI by more than three-quarters of financial services companies.
In a report, the cross-party committee called on the central bank and the financial watchdog to carry out a stress test to assess how the financial system would cope with a future “AI-driven market shock”.
“By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm,” it said. “Action is needed to ensure that [use] is done safely.”
MPs also urged the government to speed up its decision on which Big Tech groups should be subjected to direct supervision by financial regulators as critical suppliers of cloud computing or AI services to the City. In November, City minister Lucy Rigby said this would happen within a year.
The FCA should publish “practical guidance” on how existing rules applied to the use of AI and how the regulator would decide who to hold responsible when things go wrong, MPs said.
Dame Meg Hillier, Labour chair of the committee, said: “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying.”
The BoE warned last month in its financial stability report that soaring share prices of AI-focused tech companies had pushed US stock market valuations to levels not seen since the dotcom bubble in 2000.
It also said an expected $2.5tn of debt funding for building data centres and other AI-related infrastructure “could increase financial stability risks”.
But UK regulators have taken a hands-off approach to supervising AI usage by financial services companies, even as the fast-evolving technology transforms many of their operations, including fraud detection, customer service, algorithmic trading, investment advice and corporate research.
FCA chief executive Nikhil Rathi told the FT Global Banking Summit last month that there needed to be a “different relationship between regulator and regulated” to avoid stifling the use of new technology such as AI. “I’m not sure what detailed rules around AI would look like,” he said.
Regulators are under pressure from ministers and City executives to ease the burden of rules, improve UK competitiveness and boost economic growth.
The Treasury committee said in its report that “AI offers important benefits, including faster services for consumers and new cyber defences for financial stability”. But it also warned of “significant risks to consumers and financial stability, which could reverse any potential gains”.
The drawbacks identified by MPs included a lack of transparency in AI-driven decisions on the availability of loans or insurance, the risk of discrimination against the most disadvantaged consumers, misleading advice from chatbots, and new types of fraud.
The Treasury said it had been working with regulators to “strengthen our approach as the technology evolves” and had appointed Harriet Rees, chief information officer at Starling Bank, and Rohit Dhawan, head of AI and advanced analytics at Lloyds Banking Group, as “AI champions” for financial services.
“We will strike the right balance between managing the risks posed by AI and unlocking its huge potential,” it added.
The BoE and FCA both welcomed the report and plan to respond to it fully later this year. The central bank said it had “taken active steps to assess AI-related risks and reinforce the resilience of the financial system”.
The FCA, which recently launched an AI live testing service to help companies experiment with the technology, said it had done “extensive work to ensure firms are able to use AI in a safe and responsible way”.