FCA publishes an update on Artificial Intelligence (AI) in financial services
On 22 April 2024, the FCA published an update on its approach to Artificial Intelligence (AI) in financial services. The update outlines how AI relates to the FCA’s role and objectives, the work done thus far, its existing approach and how the FCA will approach this area over the next 12 months. It follows the UK government’s pro-innovation AI regulation White Paper (March 2023) and the government’s response to the consultation on it, published in February 2024.
The update includes a foreword by Jessica Rusu, the FCA’s Chief Data, Information and Intelligence Officer (a new role which leads on AI governance, amongst other technology areas), in which she describes the FCA as a “technology-agnostic, principles-based and outcomes-focused regulator”. She goes on to outline how the FCA aims to foster a framework for AI that is safe, responsible and innovation-friendly, in conjunction with the government, the Bank of England and other domestic and international stakeholders.
Work Done So Far
The FCA regards technology (including AI) as important to its role as a regulator, and to the continued success and competitiveness of UK financial services markets and their contribution to the UK economy.
In establishing its approach to AI within UK financial services, the FCA (in collaboration with the Bank of England) has published several key documents in recent years. These include the October 2022 Discussion Paper on AI and Machine Learning, the 2023 Feedback Statement responding to it, the AI Public-Private Forum (AIPPF) Final Report (2022), and the 2019 and 2022 machine learning surveys. The FCA has also collaborated in this area with other regulatory bodies such as the ICO, the CMA and Ofcom.
Key to the FCA’s intended approach is the 2022 Discussion Paper, which weighs the benefits and risks of AI in financial services against the FCA’s statutory objectives of consumer protection, competition and market integrity. The DP considers whether existing regulation sufficiently accommodates AI within the regulatory perimeter, as well as approaches the FCA may take in future within its remit to ensure that the risks of AI are duly mitigated in line with its operational objectives.
Existing Approach
The approach the FCA has established so far aims to promote the safe and responsible use of AI in UK financial markets while driving beneficial innovation. Rather than mandating or prohibiting specific technologies, the FCA seeks to identify and mitigate the risks associated with AI, ensuring that regulatory interventions are proportionate to their expected benefits, and adopts an outcomes-focused approach that allows for flexibility and innovation.
The FCA highlights the UK Government's five principles for regulating AI:
Safety, security and robustness
Appropriate transparency and explainability
Fairness
Accountability and governance
Contestability and redress
The FCA’s regulatory framework aims to align with these principles in its oversight of financial firms’ use of AI systems. The FCA also notes that it uses AI tools itself to enhance market surveillance, detect scams and support innovation in financial services. It is actively building its data and technology capabilities, including by recruiting technical experts and exploring emerging technologies such as AI and quantum computing, to stay at the forefront of financial regulation.
Plans for the Future
The FCA’s plan for the next 12 months focuses on deepening its understanding of AI deployment in UK financial markets while ensuring that regulatory adaptations are proportionate and supportive of innovation. It will undertake diagnostic work, collaborate with other regulators and monitor international developments to inform its approach. The FCA also intends to test the benefits of AI through pilot programmes and to explore new methods of regulatory engagement in this area.
The FCA wants to be at the forefront of AI developments, both in its own performance as a regulator and in ensuring these developments are integrated responsibly into UK financial markets. It also wants the UK to remain a key hub for innovation, which it supports through Innovation Hub initiatives such as the Regulatory Sandbox and TechSprints. The goal is the safe and responsible deployment of AI in UK financial markets for the benefit of consumers and market integrity.
The update is well worth a read if you’re interested in this area, as are the previous key documents linked above if you would like to understand more about this fast-developing topic.
If you would like to discuss, please get in touch.