The first panel of the afternoon covered three of the main talking points in AI: data, culture and skills.
The panel, entitled “What are the solutions to our limitations?”, was moderated by Gary Wright of Finextra. Speakers included James Benford, executive director and chief data officer of the Bank of England; Kshitija Joshi, PhD, vice president (data & AI solutions), chief data office, Nomura International; Kerstin Mathias, director of policy and innovation, City of London; and Ed Towers, head of advanced analytics and data science units, Financial Conduct Authority.
The discussion began with what each panelist is currently working on in their organization when it comes to AI. Towers pointed to the FCA’s recently published survey, which found that 75% of financial firms are already using some form of AI, and 17% are already using generative AI. Most of the use cases identified were in low-materiality areas, with the highest adoption in financial crime prevention and back-office functions.
Benford explained that the Bank of England has had an advanced analytics division for around ten years, which has worked on roughly 100 projects involving what he described as traditional AI – for example, using machine learning to support policymaking or to study the impact of unemployment on inflation. He added that an AI working group was established last year and has begun to deploy generative AI more broadly, particularly to transform existing code.
Joshi explained that she was the first person hired when Nomura International created its centralized data science team three years ago, the aim being a single team to oversee data governance and management principles. “All AI is based on the assumption that the underlying data is of good quality. In reality, that’s not really true, and we all understand that in financial services,” she said. The question, then, was how to test for toxicity, bias and hallucinations – and how to do it at scale.
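As an illustration of what testing at scale can look like, the sketch below batch-scores model outputs for toxicity and grounding so that human review can focus on flagged items. It is a hypothetical example, not Nomura’s pipeline: the keyword list, the word-overlap grounding check, and names such as `audit_batch` are placeholder stand-ins for the trained classifiers a production system would use.

```python
# Illustrative sketch of batch-scoring LLM outputs for quality issues.
# All names, checks and thresholds are hypothetical stand-ins; a real
# system would use trained classifiers and human review, not keyword lists.
from dataclasses import dataclass

TOXIC_TERMS = {"idiot", "stupid"}  # placeholder for a real toxicity classifier

@dataclass
class Finding:
    response_id: str
    toxic: bool
    grounded: bool

def is_toxic(text: str) -> bool:
    """Crude stand-in: flag responses containing blocked terms."""
    lowered = text.lower()
    return any(term in lowered for term in TOXIC_TERMS)

def is_grounded(response: str, source_docs: list[str]) -> bool:
    """Crude hallucination proxy: every sentence must share words with a source."""
    vocab = set(" ".join(source_docs).lower().split())
    sentences = [s for s in response.split(".") if s.strip()]
    return all(vocab & set(s.lower().split()) for s in sentences)

def audit_batch(responses: dict[str, str], sources: list[str]) -> list[Finding]:
    """Score an entire batch so review effort targets flagged items only."""
    return [
        Finding(rid, toxic=is_toxic(text), grounded=is_grounded(text, sources))
        for rid, text in responses.items()
    ]
```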
From Joshi’s perspective, there have been two distinct periods of AI adoption: before and after the release of ChatGPT. “Before, if you talked to stakeholders – not even about AI, but about deploying data analytics – it was a big no. After ChatGPT, everyone wanted to use it and do something – anything – with AI.”
The conversation then turned to education, as Joshi explained how her team ensured the proper training and education was in place at Nomura International. Mathias highlighted that training is also a crucial priority at the City of London. “Our main objective is for London to remain a leading financial centre,” she explained. “And AI helps with that.”
“We look at it in three categories. The first concerns internal policies, the second investments and the third skills. Over the past 24 months, job postings seeking generative AI and conversational AI skills have increased 150x. You can’t close a skills gap like that simply by waiting for people to come through the pipeline. So upskilling and reskilling existing workers is crucial – and your data systems, as well as your existing systems, are part of that.”
When it comes to managing risk in AI models, Benford emphasized the need to create robust risk models and frameworks. “We set out to look at all internal policies, determine how to allocate resources and focus on the low-hanging fruit,” he explained.
“Traceability back to the source document is an important safeguard. It’s not just the model, it’s the context – that is, all the data you use to build your model. The context evolves as an organization’s knowledge base evolves. You can’t predict how a model will react in six months. Stress testing is crucial here.”
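Benford’s point about traceability can be made concrete with a small sketch: if every passage fed into a model keeps a reference to its origin, any claim in the output can be walked back to a source document. This is a hypothetical illustration, not the Bank of England’s system; the `Passage` structure, citation tags and document IDs below are invented for the example.

```python
# Illustrative sketch of source traceability: every retrieved passage keeps a
# reference to its origin, so claims in a model answer can be traced back.
# The structure and document IDs are hypothetical, not the Bank's system.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str   # e.g. a document ID or URL
    section: str  # location within the source

def build_context(passages: list[Passage]) -> str:
    """Inline a citation tag after each passage fed to the model."""
    return "\n".join(f"{p.text} [source: {p.source}, {p.section}]" for p in passages)

def trace(citation: str, passages: list[Passage]) -> Passage | None:
    """Resolve a citation tag from a model answer back to its source passage."""
    for p in passages:
        if p.source == citation:
            return p
    return None

# Usage: the model is prompted to repeat the [source: ...] tags it relied on,
# and reviewers call trace() to pull up the original document for each claim.
passages = [Passage("Inflation rose 0.2% in May.", "MPR-2024-06", "p. 12")]
print(build_context(passages))
```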
Finally, Towers highlighted the importance of working with regulators and how the FCA is helping to address this issue. “We published an AI update last year in response to a government request, which explains how our current policies, such as the Consumer Duty, apply to AI. But this is truly a time for engagement and collaboration between industry and regulators.”