Market Matters

Today’s diverse markets can feel vast and complex. From developments in voice, electronic and algorithmic execution, to regulation’s impact on liquidity, we explore the latest insights.


Trading insights: AI in the macro process, with Balyasny’s Head of Macro Research & Chief Economist

[Music]

Voiceover: Welcome to Market Matters, our markets podcast on Making Sense, the hub for J.P. Morgan Corporate and Investment Bank podcasts. In each episode of Market Matters, we discuss the latest news and trends shaping markets today.

Chris Pulman: We spent considerable effort building a robust data and modeling pipeline from software engineering principles, which means that we're better able to handle multi-dimensional data sets in a profession that typically hasn't used them. The machine learning world obviously has used these very frequently. I'd say the economics profession is probably quite far behind in general.

Eloise Goulder: Hi, I'm Eloise Goulder, head of the Data Assets and Alpha Group here at J.P. Morgan. And today, I'm so excited to be joined by Chris Pulman, who is head of macro research and chief economist at Balyasny Asset Management and who has done a lot of work on automating macroeconomic processes, and also incorporating generative AI into the investment space. So Chris, thank you so much for joining us here today.

Chris Pulman: Thank you. It's great to be here.

Eloise Goulder: Could you start by introducing yourself and your background?

Chris Pulman: Sure. Well, funnily enough, I began my career in this very building 20 years ago, almost to the day. I've been at Balyasny Asset Management, BAM, for about six years now. Prior to that, I was a senior desk strategist on the macro trading desk at Morgan Stanley. Before that, I had trading roles as a portfolio manager at a fund called GLC, and I began my career at Lehman Brothers.

Eloise Goulder: How surreal to be back in the building that you started your career in. So Chris, in your current role as head of macro research and chief economist at Balyasny, where and how exactly do you create value?

Chris Pulman: So that's a great question. Balyasny Asset Management is a large, diversified, multi-strategy, multi-PM investment firm, which means we have many investment teams across a variety of strategies, ranging from equities long/short to equities arbitrage, macro, commodities, systematic and growth equity. Many of those teams will have their own views on how the world is evolving, and some of those views may contradict one another. That's a good thing, because it's diversity amongst the investment ideas in the firm. So what we try to do in the central macro research team is ensure that the best views from the street, consultants and academia are put in front of our portfolio management teams, so that they have the best information to test the assumptions behind the trade ideas they have on.

At the same time, the street doesn't always do everything brilliantly, so it's important that we have a function to fill in those gaps. There are a few ways we can do this. One, which I think is becoming increasingly important across investment firms in general, is technology. Often the question in research is, do you want to be first or do you want to be best? We know it's very hard to be best at everything; you have to pick a few things to be really good at. But being faster than the next person, that's like the rule on the savannah. So we try to use technology and economic expertise to ensure that our portfolio managers are on the front foot. The other area where you can be best, without necessarily doing everything in-house, is deep-dive research processes. A typical, very centrally driven investment firm would come up with forecasts and views on all sorts of different parts of the economy, central banks, geopolitics and so forth, which is a very expensive thing to do. So instead we pull in external expertise in specific domains where you can get a very high-quality answer.

Finally, there's providing a forum for debate. One of the key things we have at BAM is a collaborative culture, where we want an exchange of ideas in order to make each other smarter, because the investment teams typically have very different specialisms in different areas. So what we try to do is ensure that others in the firm know the need-to-know information from each specialism that may be relevant to their own domain. We have well-attended meetings and macro calls with investment teams in the different strategies, like equities and commodities. That comes back to the other question: the flip side of having a collaborative culture is that you're at risk of groupthink. So we try to have a CIA-style red team approach of bringing in experts who are on the other side of the debate, so that the assumptions and key drivers behind those ideas can be tested and embedded into the investment process.

Eloise Goulder: That's absolutely fascinating, all of those ways in which you create value. And I love your point about not being able to be the best all the time; you need to be very selective about where you are the best, but speed, being the first, is equally an advantage, and you need the technology to achieve that. I also found it interesting to hear where you're sourcing information and ideas from, whether it's the in-house experts that you have across different teams, or sell-side research, policymakers, or academia. So Chris, how do you go about exposing all of that information and those data sources to your firm, and in particular, how do you go about filtering that information?

Chris Pulman: That's a great question. It comes back to technology; that's really the only way we can consume so much information. Even if you had a specific person devoted to doing that filtering, they would quit after six months, because it's a terrible job and you can't read a hundred pieces of research a day. Generative AI, though, can actually do that relatively well, and with recent advancements it's possible to get much higher-quality summarization than from the earlier versions of these large language models. So the first part is filtering: we look at all of the economists and strategists on the street and decide who is good and who isn't. You can filter by quality, and then you can connect that to large language models to summarize and aggregate it up again many times. We also do dimensional reduction of economic data, the so-called GDP nowcast and so forth, which is an easy way of taking lots of information and distilling it into a simple number that's easy for investment teams to use and interpret. I'd say where technology has made this different from the typical way of doing things is that we spent considerable effort building a robust data and modeling pipeline from software engineering principles, which means that we're better able to handle multi-dimensional data sets in a profession that typically hasn't used them. The machine learning world obviously has used these very frequently; I'd say the economics profession is quite far behind in general, particularly when it comes to things like point-in-time economic data and moving beyond simple single-factor models into things that are more disaggregated and then aggregated back up.
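
As an illustration of the dimensional reduction Chris mentions, here is a minimal sketch, assuming a hypothetical panel of monthly indicators; real nowcasts typically use dynamic factor models on point-in-time vintages rather than a single principal component:

```python
# Minimal sketch: distill a panel of monthly indicators into one
# activity factor via PCA, in the spirit of a GDP nowcast.
# The CSV path and column layout are hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

panel = pd.read_csv("macro_indicators.csv", index_col="date", parse_dates=True)
panel = panel.dropna()  # real pipelines handle ragged release edges more carefully

z = StandardScaler().fit_transform(panel)             # standardize each series
factor = PCA(n_components=1).fit_transform(z)[:, 0]   # first principal component

activity = pd.Series(factor, index=panel.index, name="activity_factor")
print(activity.tail())
```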

Eloise Goulder: It sounds like you are pretty advanced when it comes to aggregating all of this information and then analyzing it, particularly in the context of GPT only becoming industrialized in the commercial sphere in the last 12 months or so. When did you start this process of aggregating all of this content?

Chris Pulman: We actually started this process a couple of years ago. There had been a view that a research library that was searchable in an intuitive way would be a very useful tool to have, because a typical question from a portfolio manager to their analyst might be, "What's going on in Australia? Are there any trades to do there?" We conducted an experiment on this: we got a typical analyst to answer this question, and it would take them about seven hours. Roughly two hours of that would go on finding the pieces of information to read, which would involve going to the central bank website, logging onto various investment bank websites and trying to find the research with varying success, calling up a salesperson to find out which economist is writing on the subject, and getting the email back from them. So about two hours in that process to gather sufficient information. Then four hours or so to read it, followed by 30 minutes to an hour to synthesize that information into a concise summary for a portfolio manager whose time is very valuable. We've got that time down to a couple of minutes.
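
A minimal sketch of what such a searchable research library might look like under the hood, with a toy `embed` function standing in for whatever embedding model a firm actually uses:

```python
# Sketch of a searchable research library: embed documents once, then
# answer "what's going on in Australia?" by cosine similarity.
# `embed` is a toy stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

docs = [
    "RBA holds cash rate, flags sticky services inflation",
    "US payrolls surprise to the upside",
]
library = {doc: embed(doc) for doc in docs}

query = embed("What's going on in Australia? Any trades to do there?")
ranked = sorted(library, key=lambda doc: -float(library[doc] @ query))
for doc in ranked:
    print(doc)
```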

Eloise Goulder: That's amazing.

Chris Pulman: To do that, we had to attach economic priors to the large language models in order to get good enough results. I'm sure many listeners may have played around with these language models and found them a little bit underwhelming, but actually, you can get a lot more out of them if you approach them with a software engineering mindset, which is to take building blocks. There's a concept in software engineering called object-oriented programming, which is a bit like Lego: you get lots of building blocks and put them together to build something a lot bigger. You can do that in quite fascinating ways to get these results out. And the key thing with that speed improvement is that rather than spending seven hours answering the portfolio manager's question, they now have six hours and 58 minutes to spend on coming up with a better trade expression, which will raise the expected Sharpe of the related trade. They can also do this many more times, and we know the fundamental law of active management is that the Sharpe ratio of a strategy scales with the square root of the number of independent investment ideas.
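
For reference, the fundamental law Chris cites is usually stated (after Grinold) as:

$$ \mathrm{IR} \;\approx\; \mathrm{IC} \times \sqrt{\mathrm{BR}} $$

where IR is the information ratio, IC the information coefficient (the correlation between forecasts and outcomes), and BR the breadth, i.e. the number of independent ideas taken per year. Holding skill per idea constant, risk-adjusted performance therefore scales with the square root of N, which is why raising the number of independent trade ideas matters.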

Eloise Goulder: So N is critical?

Chris Pulman: Exactly.

Eloise Goulder: Number of trade ideas is critical. You can analyze more themes, more regions, more asset classes and come up with, in theory, better ideas. And do you see that playing out in practice? Do you see the portfolio managers analyzing many more trade ideas at this stage?

Chris Pulman: Yeah, certainly, across the investment teams throughout the firm. You very frequently walk past desks and see people with the system open and questions being answered. So I think in practice, we're definitely seeing that, and it's really building.

Eloise Goulder: So you've shown there, the massive benefits to the portfolio managers and the researchers being able to analyze so many more ideas and ultimately to up the N and hopefully up the Sharpe ratio of their trade ideas. What about the benefits to your team, Chris? I mean, if your team has really cut a lot of time spent researching, what do they spend that incremental time on?

Chris Pulman: Yeah. So one example would be seasonal adjustments, which are worth automating. Historically, lots of economists would typically pull up Haver and do it series by series, which is very slow, and several economic data releases have 500 components to them. So why not save the human time, automate that piece, and leave the human to work on the qualitative adjustments that the data isn't going to capture? That's another area where, you know, we work with our commodities team, leveraging their expertise in weather forecasting to embed those insights into automated forecasts. Central bank previews are another great example, where previously I would spend a couple of days with an analyst writing a preview to send out to the investment teams, which would have to cover our own proprietary views and the views of street economists, laid out in a very nice, easy-to-read manner with all the key information a portfolio manager would want to see. So enough information to get detail, but not too much.

Eloise Goulder: Mm.

Chris Pulman: The relevant pieces of central bank speech over the month, various charts relevant to the discussion, plus a very short summary at the beginning. By having this platform in place, we've put these pieces together in a workflow that has turned this into something that takes about 30 minutes. So there are a lot of things I can do with almost two days' time.

Eloise Goulder: (laughs)

Chris Pulman: So first of all, we can do a lot more. As a central team, we typically try to cover the major economies, which are areas where we wouldn't want all the investment teams to replicate and duplicate effort; instead, we try to centralize some of that. But now we can do a lot more: rather than just covering, say, the major central banks, we can cover some extra ones. So that's one example.

Eloise Goulder: Uh.

Chris Pulman: The other is spending time working on the next generation of more sophisticated economic modeling, the kind of work that's at the forefront at central banks, which have several hundred staff. Having that extra time to devote to perceived long-shot bets that may pay off as a strategy down the road ensures that we're going to be able to continually deploy new innovations.
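
A minimal sketch of the batch seasonal adjustment Chris described a moment ago, looping over the components of a release rather than adjusting one series at a time; the data file is hypothetical, and production systems typically use X-13ARIMA-SEATS rather than this simple decomposition:

```python
# Sketch: batch seasonal adjustment across many release components.
# Data source and column names are hypothetical placeholders.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

release = pd.read_csv("release_components.csv", index_col="date",
                      parse_dates=True)  # e.g. 500 monthly components

adjusted = {}
for name, series in release.items():
    result = seasonal_decompose(series.dropna(), model="additive", period=12)
    adjusted[name] = series - result.seasonal  # strip the seasonal component

sa = pd.DataFrame(adjusted)
print(sa.tail())
```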

Eloise Goulder: Absolutely incredible the way the industry, within your firm at least, has evolved and how the job of a research analyst, as you say, has completely transformed from those seven hours to those two minutes. It's mind-blowing. Where do you think it goes from here? I mean, you've already made these enormous leaps. What happens next?

Chris Pulman: So that's an even more exciting question. I have a medium-term idea: can you make an AI economist that can actually do something useful? And I'd say today, you probably can't. There's this idea that often comes up: let's use natural language processing to examine central bank speech and come up with a hawk-dove-type indicator to trade with. Well, there are various versions of these, but the trouble is that the language models are not very good at reasoning. They also fit to historic phrases that have been used very frequently. Just one anecdote on that front: from about 2010 to 2018, the Federal Reserve often used the words "patient" or "patience" to imply they were going to delay hiking interest rates. So a naive implementation of one of these models, whenever it has come across "patient" or "patience" in the past few months, now that the language has re-entered the Fed lexicon, has said that's a dovish signal.

Eloise Goulder: Yeah.

Chris Pulman: Obviously, it's the opposite-

Eloise Goulder: Yes.

Chris Pulman: ... [inaudible 00:13:48]. Then it comes back to the idea that these are statically trained models, but economics is a nonlinear dynamic system that we can only know a limited amount about, and at all times everything is context dependent. So an AI economist would have to have knowledge of context. You can do that if you have a well-built-out data and modeling platform, because you can inject that context as one of the parts. Back to the Lego concept: you can use the AI to do some bits and more programmatic solutions for the other pieces, then aggregate them together. It just becomes a much more industrial project to generate that. But there is some evidence that language models possess some reasoning capability. We've definitely come across that in our experiments and have begun to apply it to central bank forecasting, not necessarily from the central bank speak, but by distilling things into the types of things that matter to central bankers: the labor market, growth and activity, inflation, underlying inflation expectations and so forth. We take that information with very basic reasoning questions, and then put the decisions that come from the language model into a more systematic, information theory-based approach for evaluating them. But at some point, I assume we won't need to do that, because the technology will improve. So the key thing really is that you've got to be in a position where you can slot these developments in as they occur.
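
A minimal sketch of that building-block idea, with a stubbed `ask_llm` standing in for a real language model call: narrowly scoped questions per component, programmatic aggregation, and entropy as a rough information-theoretic measure of how decisive each answer is. Components, weights and canned answers are all illustrative:

```python
# Ask a language model simple, well-scoped questions per component,
# then aggregate the answers programmatically instead of trusting one
# end-to-end hawk/dove score. `ask_llm` is a placeholder stub.
import math

def ask_llm(question: str) -> dict:
    # Placeholder for an LLM call returning a probability distribution
    # over policy-relevant readings of one component.
    canned = {
        "labor market": {"hawkish": 0.5, "neutral": 0.3, "dovish": 0.2},
        "inflation": {"hawkish": 0.6, "neutral": 0.3, "dovish": 0.1},
        "growth": {"hawkish": 0.3, "neutral": 0.4, "dovish": 0.3},
    }
    return next(v for k, v in canned.items() if k in question)

components = {"labor market": 0.35, "inflation": 0.45, "growth": 0.20}

score, confidence = 0.0, 0.0
for name, weight in components.items():
    dist = ask_llm(f"How would a central banker read the latest {name} data?")
    signal = dist["hawkish"] - dist["dovish"]        # net policy lean
    entropy = -sum(p * math.log(p) for p in dist.values() if p > 0)
    score += weight * signal
    confidence += weight * (math.log(3) - entropy)   # low entropy = decisive
print(f"net hawkish lean: {score:+.2f}, confidence: {confidence:.2f}")
```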

Eloise Goulder: So interesting. And when you think about the competitive landscape, it sounds like you're very much an early mover in aggregating all of this information together and then utilizing it in a smart way. Is it an inevitability that the competition catches up, and how do you envisage retaining your edge?

Chris Pulman: I think some competition will definitely catch up at some point, but technology in the space in general, not just AI, is moving so fast, and it's become possible to do so many things so quickly, that only those firms that have invested over the last five to 10 years stand much chance of staying at the forefront. For example, if I look at how the use of technology in macro has changed since I began in the markets: it was Excel spreadsheets. A few people might know how to use EViews or Matlab, but it was all very basic.

Eloise Goulder: Mm.

Chris Pulman: Typically, a chief economist would go to conferences, listen to the speeches, and then send back an interpretation. I think the returns to that approach are largely exhausted-

Eloise Goulder: Mm.

Chris Pulman: ... because information flows so fast. So the kinds of firms that I suspect will do well in this are those that already have very advanced modeling, data and technology platforms, because research has increasingly become a technology question, much more so than an economics question. In fact, Bernanke, in his review of the Bank of England's forecasting process, noted that it needed to upgrade its computer systems-

Eloise Goulder: Ah.

Chris Pulman: ... so industrial-scale enterprise solutions for modeling, I think, are now a prerequisite.

Eloise Goulder: Fascinating. And as you said at the beginning, it's not just about being the best, it's also about being the first. And with technology, you have that ability to be early, to have the speed, even if the answer isn't perfectly correct.

Chris Pulman: Exactly. Having an inaccurate but roughly okay answer very fast is better than having a perfect answer two days later.

Eloise Goulder: Absolutely. It reminds me of a conversation I had with Helen Jewell, the European CIO of Fundamental Equities at BlackRock, where she argued that the job of a single-stock research analyst is to be directionally correct and early rather than absolutely correct; in her words, seeking perfection is redundant. So you've spoken about the advantages of machine learning and large language models, and it's clearly transformative in your world. What about the downsides? I often hear that there's not enough information, that N isn't large enough for a machine learning model to be accurate. Perhaps you have an overfitting risk. How do you think about that?

Chris Pulman: Yeah, it's really interesting. In econometrics, for decades we've been taught the principle of parsimony: that we should have small, very simple models that explain the data best with the least number of inputs. But I think Silicon Valley is really showing us that maybe we need to question that assumption-

Eloise Goulder: Mm.

Chris Pulman: ... The various language models, which have billions upon billions of parameters with interactions and are obviously very nonlinear, really question that somewhat. When you play around with the language models, they seem to do relatively well with information they haven't seen in the training sets. Traditionally you would say that was massively overfit, but there is increasingly this concept in the academic literature of benign overfitting, of using interpolators, which exactly fit the data. The more data you throw into these and the more relationships you pick up, the better the model becomes at fitting. So I think that's an open question. But where I do land is that more important than having great data or a great model is having a robust pipeline and modeling platform to plug things into, because I think the most important thing is to have enterprise-level infrastructure: robust pipelines for data and modeling that are relatively easy to interact with and can be relied upon. A lot of typical econometric models in the industry, the GDP nowcasts or inflation nowcasts or various approaches from spectral methods to regressions, for example, tend to require a lot of maintenance. These things tend to break, because over the past decade they've been put together by analysts who have learned to code as a skill that's bolted on to-

Eloise Goulder: Mm.

Chris Pulman: ... their main area of expertise. Maybe they're an economist, maybe they're a finance major, and they've learned to code the way you learned Excel in the late '90s and early 2000s. The trouble with that is it's not very robust. They've not thought about edge cases or how things tend to break. So you might have this super-duper model that has an indicator, and it works for six months, and then it breaks and you lose the indicator you were relying on. You also lose the analyst, 'cause they have to go away and fix the problem, and by the time it's fixed, you've maybe moved on to other things.

Eloise Goulder: Mm.

Chris Pulman: And you've lost confidence in the resilience of the thing. So I think being able to slot things in without worrying about that is a much better approach, and the key benefit is that it makes it easier to innovate. If you have great infrastructure, you're going to be both faster and better at analyzing new alternative data sets. You're going to be able to devote more valuable human time to cross-checking and evaluating judgmental biases. You're going to be faster to slot in a new economic model from the literature, if your infrastructure is adaptable enough that the maths is simple to express within it and the data side can be hooked up without too much effort. You're just going to be able to do a lot more things, and that comes back to the point about independent trade ideas and the Sharpe ratio.
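
A minimal sketch of that slot-things-in idea, assuming a hypothetical common interface that every model on the platform implements, so a new indicator from the literature can be added without touching the surrounding pipeline:

```python
# Every model implements one small interface, so new indicators plug
# into the same runner. The interface and toy model are illustrative.
from typing import Protocol
import pandas as pd

class MacroModel(Protocol):
    name: str
    def fit(self, data: pd.DataFrame) -> None: ...
    def predict(self, data: pd.DataFrame) -> pd.Series: ...

class MomentumNowcast:
    name = "momentum_nowcast"
    def fit(self, data: pd.DataFrame) -> None:
        self.means = data.diff().mean()        # average recent change
    def predict(self, data: pd.DataFrame) -> pd.Series:
        return data.iloc[-1] + self.means      # extrapolate one step ahead

def run(models: list[MacroModel], data: pd.DataFrame) -> dict[str, pd.Series]:
    # The pipeline never needs to know what each model does internally.
    out = {}
    for m in models:
        m.fit(data)
        out[m.name] = m.predict(data)
    return out
```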

Eloise Goulder: That's so interesting because I always ask our guests what's more important, the data or the model? But from what I'm hearing, a scalable tech infrastructure is really what you deem to be critical, whether that impacts the data itself and the data sourcing or the modeling. Is that correct?

Chris Pulman: Absolutely. If you've got that system, you can be much more confident that back-tested Sharpe ratios of strategies developed with these indicators are going to work out of sample-

Eloise Goulder: Mm.

Chris Pulman: ... if you've used true point-in-time data and created these models only using the information you had at the time.
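
A minimal sketch of that point-in-time discipline, assuming hypothetical vintage and trade-date files; the as-of join ensures each date sees only figures published on or before it, never later revisions:

```python
# Align each trading date with the data vintage actually available then,
# so backtests never see revised or future data. Files are hypothetical.
import pandas as pd

vintages = pd.read_csv("gdp_vintages.csv", parse_dates=["published"])
# columns: published (release timestamp), value (figure as first released)
trades = pd.read_csv("trade_dates.csv", parse_dates=["date"])

vintages = vintages.sort_values("published")
trades = trades.sort_values("date")

# direction="backward": take the latest figure published on or before
# each trade date, never a later revision.
pit = pd.merge_asof(trades, vintages,
                    left_on="date", right_on="published",
                    direction="backward")
print(pit.head())
```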

Eloise Goulder: And I wanted to pick up on the point you made about people and the idea that robust tech skills are absolutely critical. It sounds like you deem that more valuable than a domain expert with a bolt-on coding skill set. Is that correct, and what do you deem to be the most valuable skill set in your world for the future?

Chris Pulman: That's quite an interesting question. I've found it easier to teach economics to mathematicians and computer scientists than I have mathematics and computer science to economists. Obviously, I have a relatively small sample set for that-

Eloise Goulder: Mm.

Chris Pulman: ... but the flexibility and rigor that math and computer science apply, and a lack of bias in the process, are quite useful. There's a concept in psychology called belief bias: we tend to evaluate a conclusion based upon how plausible it sounds rather than how well the evidence supports it. And I would say mathematicians tend to be a bit more rigorous in answering those questions and less prone to narratives.

Eloise Goulder: It's so interesting. I mean, we often discuss this debate on this podcast series, but I certainly think you're much further along the spectrum of maths over domain than many of our guests. But from what I hear, your end conclusion is still to have both: if you're hiring a mathematician or a computer scientist, they've still got to learn the domain, and that combination is ultimately the most powerful thing. And if we go back to that topic, you mentioned earlier that training your GPT with hypotheses and priors is key. Could you give me some use cases for this? How are you effectively combining your economic priors with this technology?

Chris Pulman: Sure. So if you wanted an indicator that tracked the evolution of the economy or of central bank policy, you might think, okay, we could ask the model on its own to give us a probability of rates going up or down, based upon some information we give to it in a retrieval-augmented-generation-type approach. But that tends to give you very nonsensical results at times; it may be picking up incidental information, and you don't really know whether it's doing anything correctly out of sample. So instead you could say, well, I know how a central banker would interpret this piece of information, so I have to explain to the model how to evaluate information about the labor market. You might read that the labor market's coming back into balance. That sounds like a positive-sentiment-type statement, so the naive approach of historical natural language processing-driven strategies would be to say, okay, it's a positive sentiment score. But that again is context dependent-

Eloise Goulder: Huh.

Chris Pulman: ... because it could be coming into balance from overheating, in which case rates are potentially on hold, or maybe we're not going to hike quite as much because things are coming better into balance; or it's coming into balance from below, and that's a different outcome. That's more room to grow, so it's probably quite a positive thing for equity markets in that example. So adding economic priors about how to treat that information, and then adding it up across the different areas, gives you a much better way of interpreting these models. Another one might be trading U.S. interest rates based upon street research, for example. If we just took in all of the research from all of the banks, consultancies and third-party research shops, we would need to guide the model to pay attention to the right people as opposed to paying attention to everybody. It's a bit like trying to improve the signal-to-noise ratio, because we've got this new data set that language models have allowed us to attack in a way that historically only natural language processing-driven academics were able to.
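
A minimal sketch of the economic prior in that labor-market example, with illustrative states and mappings; the same phrase earns a different read depending on whether balance is being approached from overheating or from below:

```python
# "The labor market is coming back into balance" is not a fixed sentiment
# signal; its market read depends on the starting point. The states and
# mappings below are illustrative priors, not a production rule set.

def rates_read(statement: str, starting_point: str) -> str:
    """Interpret a 'coming into balance' reading given its context."""
    if statement != "coming into balance":
        return "no prior defined"
    if starting_point == "from overheating":
        # Cooling from above: hikes less likely, rates on hold.
        return "mildly dovish for rates"
    if starting_point == "from below":
        # Recovering from slack: more room to grow, equity positive.
        return "growth positive, little rates signal"
    return "ambiguous without context"

for start in ["from overheating", "from below"]:
    print(start, "->", rates_read("coming into balance", start))
```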

Eloise Goulder: Absolutely. And as you say, signal-to-noise because there is so much potential noise when you're aggregating that much content, far more content than any individual analyst will ever go and see. So Chris, you're clearly continuing to develop these models, but over the next couple of years, what's really next for you and your team?

Chris Pulman: Yeah, we'll continue to leverage the platform and stay in a position where we can slot the next generation of artificial intelligence models into our existing infrastructure, to try to be amongst the first to benefit from them. I have the aspiration of creating an AI economist, which has various building blocks to it and is obviously a very substantial challenge. But, you know, we do see some reasoning ability in these language models at a very basic level. So if you can build those small pieces by writing recipes, or Lego instruction manuals, maybe we can get there.

Eloise Goulder: What an aspiration, an AI economist. Certainly something to watch. Well, thank you so much, Chris. This has been such an interesting discussion and I've learned so much, and how fascinating that this discussion should be so focused on data and technology and automation in the context of your role as head of macro research and chief economist. I mean, it tells me that in order to be at the cutting edge of what you do within your domain, those innovation skills really are critical right now. So thank you so much for taking the time to speak with us today.

Chris Pulman: Thank you.

Eloise Goulder: Thank you also to our listeners for tuning into this bi-weekly podcast series from our group. If you'd like to learn more about Chris's work and Balyasny Asset Management, then please do see the links in the show notes. Otherwise, if you have feedback or if you'd like to get in touch, then please do go to our team's website at jpmorgan.com/market-data-intelligence, where you can reach us via the contact us form. And with that, we'll close. Thank you.

Voiceover: Thanks for listening to Market Matters. If you've enjoyed this conversation, we hope you'll review, rate, and subscribe to J.P. Morgan's Making Sense to stay on top of the latest industry news and trends, available on Apple Podcasts, Spotify, Google Podcasts, and YouTube. The views expressed in this podcast may not necessarily reflect the views of JPMorgan Chase & Co. and its affiliates, together J.P. Morgan. They are not the product of J.P. Morgan's research department and do not constitute a recommendation, advice, or an offer or a solicitation to buy or sell any security or financial instrument. This podcast is intended for institutional and professional investors only and is not intended for retail investor use. It is provided for information purposes only. Reference to products and services in this podcast may not be suitable for you and may not be available in all jurisdictions. J.P. Morgan may make markets and trade as principal in securities and other asset classes and financial products that may have been discussed. For additional disclaimers and regulatory disclosures, please visit www.jpmorgan.com/disclosures/salesandtradingdisclaimer. For the avoidance of doubt, opinions expressed by any external speakers are the personal views of those speakers and do not represent the views of J.P. Morgan. Copyright 2024, JPMorgan Chase & Co. All rights reserved.

[End of episode]

In this episode, we hear from Chris Pulman, Head of Macro Research and Chief Economist at Balyasny Asset Management. Chris discusses the role of macro research and economics in creating value within the investment process; the use of data, models, tech systems and generative AI within the macroeconomic space; and the benefits of marrying fundamental expertise with technological processes, as well as the skill sets required to excel in this area. Chris is in discussion with Eloise Goulder, head of the Data Assets & Alpha Group at J.P. Morgan.

To learn more about the Data Assets & Alpha Group: https://www.jpmorgan.com/markets/market-data-intelligence

This episode was recorded on June 26, 2024.

