
From: Market Matters

Today’s diverse markets can feel vast and complex. From developments in voice, electronic and algorithmic execution, to regulation’s impact on liquidity, we explore the latest insights.


Trading Insights: How the Bank of England harnesses AI and data

[Music]

James Benford: I can definitely see a logic that AI is like the ultimate hive mind that brings together all the different data and information sources, with that becoming consensus. And the fact that there is a strong financial incentive to know when that's wrong is a very good thing, right? … that is exactly why I think there will always be a role for humans to challenge and apply judgment and work out when that gives you the wrong answer and when it doesn't.

Eloise Goulder: Hi, I'm Eloise Goulder, head of the Data Assets and Alpha Group at J.P. Morgan. And today I'm so pleased to be sitting here in the Bank of England, joined by James Benford, who is chief data officer at the Bank. So James, thank you so much for joining us today.

James Benford: Thanks for having me. It's a real pleasure.

Eloise Goulder: So James, could you start by introducing yourself and your background?

James Benford: Sure. So, I'm the bank's chief data officer. I've been at the bank for what will be 20 years in September, and I spent most of that time as an economist in various different functions, including a previous role as Director of the Economics Group at the Treasury, where I set up a data science hub. But I'm now trying to underpin a transformation in the bank's data and analytic capabilities as chief data officer.

Eloise Goulder: And what exactly is your remit and what are you responsible for?

James Benford: So the bank's central data area is around 200 people. About half of that works through the bank's data collections. The bank collects an awful lot of data. We maintain about 30,000 published statistical series, collected and curated from the banking system by and large. We maintain SONIA, which is the critical interest rate in money markets, referenced in trillions of pounds of derivatives. But we also collect a lot of regulatory data too. In fact, our regulatory data collections are probably 10 times larger than the statistical ones. I'm also responsible for the bank's overall data strategy and the operation of the data platforms that we have here. One big responsibility there is to guide the bank on its cloud journey. We're moving onto the cloud in terms of how we work with and analyze our data. And then lastly, within my area, we have a Centre of Excellence for Advanced Analytics and Data Science. And the real focus of that is AI and what we're going to do with AI and its latest generative version in the bank.

Eloise Goulder: So there's both the data collection, data dissemination part of your role, but there's also this analytical part of your role. So, could we start with the former – the data – and how you think about using and managing it within your processes at the Bank of England?

James Benford: Yeah. So, the Bank of England, as I'm sure many people know, is responsible in the UK for monetary and financial stability. How do we do that? We set monetary policy, we set the interest rate, we set financial policy for the system as a whole through the Financial Policy Committee. And of course, we also regulate individual banks and insurance companies. We have a lot of our own operations that implement monetary policy in different ways and that provide liquidity to the financial system. And pretty much all of that is powered by data in some way or other. It's no exaggeration to say that data analytics really is the lifeblood of the Bank of England. So overall we're trying to make sure we make the best possible decisions informed by the best available data and analysis. What do we have? Well, we collect data, as I've said. Like any other organization, we also buy a lot of data, including financial market data. And we acquire a lot from organizations like the ONS and government departments. So we have a really rounded picture of what's going on in the economy and financial system. We also generate data ourselves. And that happens in lots of different ways. We run the country's payment system. Billions of pounds run through that every day, and we see the individual transactions. We manage the collateral of loan books that banks bring to us and we see the performance of that collateral, for example, in our markets area. And we get data every day, hundreds of millions of rows of it, from trade repositories on derivatives and securities financing transactions. All of that gives us really granular insights into what's happening in the economy and financial system.

Eloise Goulder: It's fascinating quite how much data you have access to, whether it's the data you collate and create yourself or the data that you buy. And in terms of how you use data, the Bernanke review on forecasting and modelling was released relatively recently. It looked at how the Bank of England itself can incorporate new data and research findings into the decision-making process. So is your work very closely linked to that, and could you speak to the evolution in your analysis of data?

James Benford: So the work I do is a critical foundation to Bernanke. One critical program within the overall portfolio of investments in data analytics that I'm responsible for is a program to set up a new data platform for economists at the bank, to populate that with the right data that they need to do their work, and also a platform to house the bank's models and make it much easier, much more efficient, and much more sustainable in terms of how we run those models. So that work actually started before Dr. Bernanke's review; it's progressing well and it's an essential foundation for it. The review also covers the MPC's processes, questions like the distinctions between the staff's judgments on the outlook for the economy and the MPC's judgments on the outlook for the economy, which will build on that foundation but are outside the scope of it. You also asked about how things have evolved over time.

Eloise Goulder: Yes.

James Benford: There's one really nice story in terms of the bank's use of data, and it's still here today. So, back in 1805, when the bank also used to manage the supply of money to the economy with the same aim of keeping prices stable, the deliberations of the bank's Court of Directors were guided by a dial in the bank's courtroom, or boardroom, and the dial was linked to a weather vane on the roof. And it said, as weather vanes do, which way the wind was blowing. And the trick was, if the wind was blowing into London from the sea, the ships would be coming in and trade would be soon to expand, and so you had to expand the supply of money to meet the demand for trade. And then when it went out again you had to do the reverse, and that dial was used to set monetary policy in the 19th century. In my terminology, that is a real-time dashboard and a model of the economy, an example of how data-driven decision-making has been right at the heart of the bank from the very beginning. Obviously today we have the cloud and data platforms to power real-time insights, but you know the intent is identical. So technology changes, but the ways of working don't necessarily change too much.

Eloise Goulder: That is absolutely incredible to hear that example and your point that processes have always been data-driven. Maybe the data that we collect changes, maybe the processes we use change, but actually they have always been data-driven. I think that is a really important point.

James Benford: But I think that point is joined to something else, which is true of any long-lived organization: there's a lot of legacy. So there are a lot of legacy systems, which may not have all the capabilities that modern systems can have in terms of scalability of compute or the ease with which one can share data across an organization or between organizations, that we need to replace. And just like everywhere else, the bank has a lot to work through there. It's one of the areas that we're focused on. But change is also opportunity. And when you replace an old system, there's the opportunity to rework the processes to make them more efficient and to unlock new capabilities.

Eloise Goulder: Well, I think it's clear that the robustness of the underlying data platform is critical to all of the analytical techniques you then use. Speaking of analytical techniques, we've obviously moved from a world of statistics and econometrics to a world of supervised machine learning techniques, to a world of AI, to a world of generative AI as you said. Within the Bank of England are you using all of those processes and how do you think about the difference between them and to what extent is generative AI a revolution versus just an evolution?

James Benford: Great question. So we have been using machine learning at the Bank for at least a decade and it's grown into many different applications actually. There's a core one in our data collections work to spot errors in the data and make sure that the data is of high quality, which is critically important. You can also use machine learning to nowcast where the economy is, to help predict coming movements in financial market prices. That's a big use case for us. Machine learning techniques are also great for measuring and quantifying risk. We've also managed to, in some cases, blend the old and the new. So there is one neural net application that uses machine learning techniques to estimate the slope of the Phillips curve, which is the relationship between unemployment and inflation and which was notoriously hard to estimate. That's an example of something that blends theory with techniques that are sometimes atheoretical. What it's been very good at is detecting a change in the relationship, and why that relationship might be changing, in a way that would otherwise have been quite hard to detect with traditional techniques.

Eloise Goulder: Fascinating.

James Benford: So, yeah, I don't think you could have detected that change so conclusively without a technique like that. So you asked about Gen AI, to what extent is it evolution versus revolution? I mean, it is still a prediction, right? Gen AI is just predicting the next word based on the context. But there are three dimensions on which I think it is a real revolution. The first is the size of the models. So the bank's core macro model has 200 parameters. The bank's largest application of a neural net, 100,000 parameters. The order of magnitude for the biggest large language models is 400 billion. So it just shows the complete step change in the complexity, the size and the power of those models. The second dimension is the data that they can work with. So for large language models, it's not just numbers, it's text, video, sound, all data, and in real time. They can work with a changing context in real time. And then I think the last piece, and this is where I think it's really revolutionary, is that it's for everyone. So anyone can use ChatGPT, anyone can use any other large language model, because it's in natural language. Previously the application of machine learning was confined to some highly trained, highly skilled, often PhD-level analytical folk, and wasn't open to everyone. So I think that's why the Governor, Andrew Bailey, last year said it had the hallmarks of a general purpose technology, the kind of technology that has caused a revolution in ways of working in the past. So I would go with revolution as well as aspects of evolution.

Eloise Goulder: That's extremely well articulated. Would you also argue that it's likely to be a revolution within the Bank of England when you think about your processes?

James Benford: It definitely has the potential to be, and we've got a real focus on that. So just to go through different objectives. A really big one for us, and I'm sure, in fact I know, it is one across the financial sector from the surveys that we run, is realizing efficiency benefits and productivity gains. A second one, which I think is really important, is making sure that we're sufficiently challenging in the way we think. What Gen AI can do is basically really increase our capacity to process and access different types of information. And that can be very helpful in challenging a particular conclusion. One trickiness with Gen AI is that, unlike a human, and I think also unlike many smaller models, it's very difficult to get the model to properly answer the question of how it came up with something. Like if I asked you a question and then I asked you why you said that, you could explain why you came up with the answer. You ask Gen AI a question, it will predict the answer to that question based on how it's been trained and based on the context; then ask it how it came up with that, and it will predict the answer to that question too. It doesn't actually tell you how it came up with the answer. And I think that makes it quite tricky to rely on as an input into a decision, but it can be useful to challenge a conclusion that a human's already got to. So like many other organizations, we've been trialing Copilot at the bank in different areas, both as a general productivity tool and particularly in coding. We're now rolling that out across the organization. The evidence that we've gained from those trials suggests that there are significant productivity benefits. They're particularly obvious for coding, and that problem I mentioned of legacy is perfect for AI: you've got lots of legacy code and you want to quickly convert it into modern code. There are lots of opportunities for that. We're also building some bespoke applications of AI as well.
So one area which we're looking at is a general pattern of business at the bank. It doesn't matter if you're in the economics area, in market intelligence or supervisory intelligence, it's basically: meet someone, write up the meeting, perhaps quantify how the meeting went on different aspects, have lots of those meetings, summarize them and then draw thematic insights, a process very easily and ably supported by AI tools. So we're building a bespoke tool to support that process and aid the efficiency and capabilities in that area. Another one is in our supervisory area. We get a lot of PDF documents, or so-called unstructured data: firms' risk packs, MI packs, Pillar 3 disclosures, and I could go on. It's no exaggeration to say it's more than anyone could possibly hope to read. So we're using AI tools to process all that information, to write summary reports and to flag areas for further investigation. The third area is the business of public consultation. So we put out a consultation on central bank digital currency, sometimes termed Britcoin. Lots of views on that, it turns out. In fact, we got 40,000 responses to that consultation paper. And using natural language techniques was a way to make sure we captured those responses in a very systematic and efficient way. And actually, we got that approach peer-reviewed, so it's something that we can safely use more broadly on consultations. So there are lots of different applications of Gen AI. I'll touch on one last one, which I think, perhaps, is the 2025 thing. The last few years were about people getting excited about Gen AI; towards the back end of last year and this year, it's been about agentic AI. I think there's a lot of potential there. And what that really is, is you've got these really large general purpose large language models which are there to carry out many tasks. But with agentic AI, what you can do is tailor it to a specific task.
And that's actually one way to control the model so it becomes more reliable for the thing you're using it to do. So it's tailored to a specific purpose, and then once you've got comfortable with that, you can sequence a number of them together and automate a workflow, and then you may allow it to press buttons. So it's specific and, potentially, autonomous. I think those are the two things that I'd point to.

Eloise Goulder: Absolutely makes sense, and trained on a very specific set of documents.

James Benford: Yes. The exciting thing about it is it's actually quite easy to build your own agent. And so it's another version of that democratization of tools: people can actually build their own AI model in a relatively easy way. And I think that too will unlock a lot. There is a very important ingredient in all of this, which is the ability to trust and rely on the model, right? And the fact that the model is big and complex is a bit of a barrier there, but I think that's one important area that will need to be worked through as we become more agentic in the use of AI. Another facet of agentic AI is that it's not just building something very specific; the natural next step that can come with it is to give it control, or change the position of the human in the loop, and I think we're still learning about the circumstances when that's a safe thing to do and when it's not.

Eloise Goulder: Thank you so much for going through all of those use cases. And I think a lot of it rings true from our side of the financial sector, the idea that it's a productivity-gaining tool for coding, very clear. Also, the way it's transforming the research process. We see something very similar here on the sell side in terms of meeting with corporates, writing up notes, aggregating those notes, coming up with a conclusion. I also really enjoyed hearing your perception of the benefits of AI, the efficiency, productivity, and also challenging thinking, because I guess we're all in the business of trying to avoid biases. We as individuals have significant biases, and Gen AI, in theory, is taking views from many more participants. Of course it's got its own bias based on the information on which it was trained, which can be significant, but it has the advantage of being trained on a broader set of information than a summary from one individual or team with their own biases.

James Benford: It's an interesting one, isn't it? The important thing here is what's the use case that you have in mind and what's the outcome that you're working towards? And then, there are important areas where we don't want the future to look like the past. I do think with any model, you need to know how it's been trained and its limitations before you put it into use. AI is no different. In fact, in many ways it carries that same property, but it's much harder to actually figure out the biases in it. That's the challenge.

Eloise Goulder: Well, you've spoken a lot about all of the data you collect, all of the analytical tools that you use, and the evolution, and perhaps the revolution, of that. We always ask our guests what's more important: the data, or the model and the analytical techniques? So James, what's your answer on that?

James Benford: So it's worth bearing in mind two things. On the data: the past is no definitive guide to the future. On the model: all models are wrong, some are useful. Right, so both are potentially missing something. And you know, you can't have the model without the data, and often you need the model to get the insights from the data. If you made me pick one just out of the two, I would go with the data. I've always been an empirical person.

Eloise Goulder: Your role might suggest that you'd pick the data.

James Benford: It might suggest that. It's interesting. I mean, if you found some data that's giving you some secret sauce that's enabling you to predict something in a way that other people can't, I think it's quite important to know why it's doing that and how it's doing that, because that's important in identifying both whether that will continue to work in the future, whether it's a reliable guide, and also whether that's a generally accessible source of information or something that others do not have, which I think is also important to know. There was one part that's not in your question, which is the person, right? I do think human in the loop is, you know, the latest way of saying it. But judgment is a real thing. And it's based on experience. It doesn't come from a model, it doesn't come from data, and you always need it. So if that were another option, I would pick the human on top of either of those two.

Eloise Goulder: What a brilliant answer. Your point that actually neither is perfect; I love that phrase, "all models are wrong, but some are useful." And therefore, the importance of the human in the loop, or the importance of the judgment, is paramount. So we've spoken about your use of data and analytical tools within the Bank of England, but your organization also has an incredible lens on what's going on across the wider economy and what's going on in the private sector. So what are your observations on this and how it's evolving over time?

James Benford: So we have a regular survey of the financial sector, and the use of AI and machine learning is plainly increasing every year, and that's predicted to continue. I think about three quarters of firms say they're using AI in some form. In terms of how they're using it so far, there's a big concentration in firms' internal risk systems, cyber systems, AML controls, and in data analytics. There's increasing focus, as we've discussed too, on AI as a productivity and efficiency tool. Maybe stepping back a bit, it's worth thinking through what greater use of AI means for markets themselves. At the end of the day, the trends are more data and more tools for processing data. And one way of viewing markets is as a mechanism for processing information. So the fact that there are more generally available data and more generally available powerful tools for processing it should make markets trade more efficiently. But there is an edge to that that is worth thinking about. It's something that one of my colleagues at the bank, Jonathan Hall, mentioned in his speech last year, which is that you can run a train of logic that if everyone's using the same data and everyone's using the same models, there could be a greater tendency towards lower dispersion of views in markets. And that could definitely give rise to situations where markets are more brittle and more susceptible to shocks, particularly if something happens that, as we discussed, the models aren't built to anticipate, or that hasn't happened in exactly the same way in the past. It will be interesting to see how AI causes markets to adapt differently, and it's one reason why there will always be a role for humans to apply judgment in those situations. Another thing which I don't think we really know how to do yet is to stress test the systems we're building. So if you have a quantitative model, you can put different numbers in, right?
If you have an AI model or copilot that's plugged into an organization's knowledge base, and the questions it's asked vary depending on the context that you're giving it, how on earth do you stress test how that is going to behave in six months' or a year's time? And that, I think, as we softwire AI into our decision-making, is something that we'll need to work through and think through.

Eloise Goulder: It's a fascinating question, the extent to which greater availability of data and analytical techniques, including Gen AI, makes markets more efficient, in that information becomes more widely available to more market participants… and it also blurs the lines between different investor types. So for example the retail investor, or perhaps we should say the non-institutional investor, which we've seen rise enormously in share of volumes post Covid, across the US and many parts of Asia. This group has greater access to data and content online, and also now has AI tools which, as you say, are widely available. And they also have an ability to trade readily via many online platforms. So they are an increasingly significant party in identifying market opportunities and inefficiencies and trading on them. And then from institutional investors, we see this blurring of the lines between 'quant' and 'fundamental discretionary' investors, given the ability to more easily analyze unstructured and textual data, particularly with the help of LLMs. So I guess all of this 'blurring of the lines' and greater market participation sounds quite good for market efficiency. But then on the other hand, the issue is we still have the risk of groupthink. We still have, for example, social media networks where certain voices are more amplified than others, or certain data sets and models might be used more widely than others. And so I guess the counterargument is that we could still end up with a narrower dispersion in views, as you cited James, which might not always turn out to be correct! And on a related note, Helen Jewell, the European CIO of BlackRock Fundamental Equities, argued on this podcast series that AI, in a way, becomes consensus. And if you really want to outperform the market, then you need the human judgment on top. You need the human in the loop to see where the AI might be wrong.
Because as you say, it is just a predictive model at the end of the day.

James Benford: So yeah, obviously, the growth of retail investors is a long-running trend. And it must be true that Gen AI tools allow retail investors to use more data to drive their decisions. I can definitely see a logic that AI is like the ultimate hive mind that brings together all the different data and information sources, with that becoming consensus. And the fact that there is a strong financial incentive to know when that's wrong is a very good thing, right? Because that creates a way to correct. But that is exactly why there's a role for humans. I think there will always be a role for humans to challenge and apply judgment and work out when that gives you the wrong answer and when it doesn't.

Eloise Goulder: So before we close, James, as you look to the future, where do you see the financial landscape evolving?

James Benford: It's important to start by saying we're still in the foothills, right? You can see the pace of innovation in the models every week, and they'll become more efficient. We're only just starting to see the process of adoption of that technology, so there's a long phase of adoption and transformation to follow. And an important question for all of this, which is still being worked through in every organization, and in a market-wide sense too, is how to drive the value from all this innovation and the adoption of the technology. You can see it in efficiencies, but that still has to be fully realized. A second thing: fundamental to what AI does is that we're moving work from people to machines. One does need to think that through a sustainability lens and work through where the energy is going to come from to meet those demands. There's also a need to think through the future of work, and we're a long way from that. There are lots of views on this one, but the long history of time has shown that technological progress augments people and you end up better off; it also shows the transition needs to be thought through carefully. One increasingly important area, including at the bank, is that you've got an incredibly powerful model that can do many more things for many more people than was possible before, so the question of what the right thing to do is becomes even more load-bearing. So the area of data and AI ethics I think will be a really important growth area, something we're thinking about very carefully at the bank. We have a data and AI ethics policy, and I suspect that will be one of the new roles that will come from this too, both making sure what we're doing is safe and also thinking about things like fairness, bias and equity, those types of issues too.

Eloise Goulder: Thank you so much, James. I mean, we've covered such a lot, and as you say, we're just at the foothills of the gen AI revolution, to use your terms. We are just at the adoption stage, and it's going to be such a fascinating thing to watch. And there are so many opportunities with that, and so many implications for market efficiency and market stability, which I guess is absolutely key in your world. What are the implications for alpha opportunities? Also key in our world. But also, as you say, what are the implications for sustainability, given the energy demands as we move from human to compute? So thank you so much for walking us through all of that.

James Benford: Thank you so much. I really enjoyed it, that was a great conversation. It's been a pleasure also to host you here at the Bank today.

Eloise Goulder: Thank you also to our listeners for tuning into this bi-weekly podcast series from our group. If you'd like to learn more about James's work at the Bank of England, then please do see the links in the show notes. Otherwise, if you have feedback, or if you'd like to get in touch, then please do go to our team's website at jpmorgan.com/market-data-intelligence, where you can reach out via the contact us form. And with that, we'll close. Thank you.

Voiceover: Thanks for listening to Market Matters. If you’ve enjoyed this conversation, we hope you’ll review, rate, and subscribe to J.P. Morgan’s Making Sense to stay on top of the latest industry news and trends, available on Apple Podcasts, Spotify, and YouTube.

The views expressed in this podcast may not necessarily reflect the views of J.P. Morgan Chase & Co and its affiliates (together “J.P. Morgan”), they are not the product of J.P. Morgan’s Research Department and do not constitute a recommendation, advice, or an offer or a solicitation to buy or sell any security or financial instrument.  This podcast is intended for institutional and professional investors only and is not intended for retail investor use, it is provided for information purposes only. Referenced products and services in this podcast may not be suitable for you and may not be available in all jurisdictions.  J.P. Morgan may make markets and trade as principal in securities and other asset classes and financial products that may have been discussed.  For additional disclaimers and regulatory disclosures, please visit: www.jpmorgan.com/disclosures/salesandtradingdisclaimer. For the avoidance of doubt, opinions expressed by any external speakers are the personal views of those speakers and do not represent the views of J.P. Morgan.

© 2025 JPMorgan Chase & Company. All rights reserved.

[End of episode]

In this episode, Eloise Goulder, head of J.P. Morgan’s Data Assets and Alpha Group and James Benford, chief data officer at the Bank of England, discuss the potential for data and AI to transform decision-making both at the Bank of England and across the finance industry more widely. They also discuss the impact of generative AI on market efficiency and financial stability, the balance between the use of AI and human judgement, as well as the ethical considerations shaping the future of financial services.


This episode was recorded on March 19, 2025. 

