How is the generative AI landscape evolving?
[Music]
Jack Atherton: Welcome to Research Recap on J.P. Morgan's Making Sense podcast channel. I'm Jack Atherton. I cover TMT Specialist Sales here at J.P. Morgan, as well as running the broader US sector specialist team. And today I'm joined by two of my colleagues, Mark Murphy, the head of US Enterprise Software Research, and Gokul Hariharan, the co-head of APAC TMT Research. Today we're gonna be discussing the outlook for generative AI.
Mark Murphy: Thank you, Jack. It's a real pleasure to be here with both you and Gokul.
Gokul Hariharan: Thanks, Jack, for having us on this podcast.
Jack Atherton: So as a quick intro, I like to think about the AI investment and economic framework in terms of six pillars, as follows. Firstly, the AI-leveraged silicon and hardware spend. So that's the NVIDIAs, the AMDs, the TSMCs of the world. Secondly, the LLM operators, that's OpenAI, Gemini, Anthropic, as well as others. Third is the hyperscalers, so that's AWS, Azure, GCP. Number four is the cloud infrastructure and consumption software space, so that's the data layer, the orchestration, the monitoring, the operations, the security. That includes Databricks, Snowflake, Datadog, and others. Number five, it's the IT services that are helping to enable this, so it's IBM, Accenture. And then finally, it's the application software layer. So we're gonna be touching on each of those sectors today. But to start with Gokul, I'm gonna come to you on that first layer. So thinking about hardware spend on the data center, can you frame the TAM for us there?
Gokul Hariharan: Thanks, Jack. I think if you look at the hardware spend on the data center before we come into generative AI, there's roughly 200 to 250 billion of total spending on the hardware side in the data center each year. This is something that's been growing at about 10 to 15% every year. This can be considered as the total addressable market, given that we expect GenAI to eventually percolate into each and every aspect of enterprise and data center spending. Of this 200 to 250 billion market size, maybe only about 20% is addressed by GenAI at this point in time. So there is still a lot of runway in terms of growth for generative AI and accelerated computing in general. Now, if you think about the pillars of demand, right now what we are seeing is mostly the large hyperscalers, the cloud-scale players like Google, and Meta, and Microsoft, who are adopting AI and starting to integrate generative AI into their various key applications. They're also writing new applications as time goes by. This is an area that should have quite a bit of runway in terms of growth over the next few years, given that we are only in year two. The second area of demand that we see is on the enterprise side. 2024 is probably the first year where most enterprises have dedicated GenAI budgets. So we are starting to see the spending pick up, and this is likely to continue over the next few years. Usually, enterprises will adopt AI applications with a delay. So that's likely to happen over the next three to four years rather than the immediate future. Now, the third element of demand is something that is a little bit unique, which is primarily a lot of sovereigns or local government bodies stepping in and realizing that they also need to control the data that is being produced in various domains and potentially have large language models and GenAI business models evolving locally within countries. So this is quite new, and this is leading to tens of sovereigns putting in meaningful investments in infrastructure as well as training their own large language models, tailored to that particular geography. Predicting this demand is a little bit tricky, but it is clearly adding additional demand for generative AI.
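To make the sizing arithmetic concrete, here is a rough back-of-envelope sketch in Python. It uses only the ranges quoted above; the midpoints and the five-year horizon are illustrative assumptions, not figures from the episode.

```python
# Back-of-envelope sizing from the ranges quoted above.
# Midpoints and the 5-year horizon are illustrative assumptions.

total_dc_hardware = (200e9 + 250e9) / 2   # ~$225B annual data center hardware spend
growth_rate = 0.125                       # midpoint of the quoted 10-15% annual growth
genai_share_today = 0.20                  # ~20% of the TAM addressed by GenAI so far

genai_spend_today = total_dc_hardware * genai_share_today
unaddressed = total_dc_hardware * (1 - genai_share_today)
tam_in_5y = total_dc_hardware * (1 + growth_rate) ** 5

print(f"GenAI-addressed spend today: ~${genai_spend_today / 1e9:.0f}B")  # ~$45B
print(f"Runway still unaddressed:    ~${unaddressed / 1e9:.0f}B")        # ~$180B
print(f"Total TAM in five years:     ~${tam_in_5y / 1e9:.0f}B")          # ~$405B
```

On those quoted ranges, GenAI's roughly $45 billion slice of a growing $225 billion market is what underpins the "lot of runway" point.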
Jack Atherton: Yeah, that sovereign spend is very interesting. It came up a lot around NVIDIA's GTC day, where it was one of the focus points. So the easy play in all of this has been the GPU winners, that's NVIDIA and AMD. Now we're starting to see investors look further afield to everything tangential in the data center. Can you talk a little bit about where those dollars are going outside of just core GPU spend?
Gokul Hariharan: Sure, Jack. Within the core silicon itself, the one other company that we do highlight in a lot of our research is TSMC, who is essentially making almost 100% of all these AI accelerators that are being sold in the market. So they're kind of like the enabler selling a lot of the picks and shovels for this AI gold rush. Now, I think GPU computing, or accelerated computing, is also changing the way modern data centers are being architected. This is impacting various components that comprise a data center. At the core of this is the huge amount of computing power that the GPU provides. But at the same time, it also requires a very special and fast memory called high bandwidth memory, which is critical to store all these humongous large language models and enable the fast and parallelized computing that these GPU clusters are performing. So memory companies are clearly seeing a meaningful uplift in demand. Also, most of these LLMs, or large language models, of today are not trained by tens or hundreds of GPUs; they're trained by clusters made of tens of thousands of GPUs. Meta, for instance, announced the creation of a 24,000-GPU cluster just two, three weeks back to help their GenAI training effort. Now, when you have such large clusters, all of them need to be able to communicate with each other very fast. This requires a reimagination of networking capability. So the whole networking and optical communication space is also seeing a meaningful upgrade. Also, most of these large GPU clusters are extremely power hungry. Some of these data centers that are hosting these AI GPUs can get up to 50 to 100 megawatts or more of power consumption, which is 3 to 10 times larger than conventional data centers. So this also requires industry to completely redraw power delivery to these data centers. This also generates a lot of heat. So how to cool this infrastructure, whether it is using liquid cooling or immersion cooling, all of those problems also create new challenges and new opportunities for companies. Lastly, we are also starting to see a lot of new greenfield data centers starting to get built specifically to house GPU clusters. And these data centers are obviously going to be built in a completely different manner. So these are all the areas where we are seeing this impact manifest beyond just the core compute and GPU space.
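To see why these clusters strain power delivery, a minimal sketch of the arithmetic. The per-GPU draw and the PUE (power usage effectiveness) figure are illustrative assumptions, not numbers from the episode.

```python
# Rough power estimate for a 24,000-GPU cluster like the Meta build-out
# mentioned above. Per-GPU wattage and PUE are illustrative assumptions.

num_gpus = 24_000
watts_per_gpu = 700   # assumed draw for a high-end AI accelerator
pue = 1.3             # assumed facility overhead (cooling, power conversion)

gpu_power_mw = num_gpus * watts_per_gpu / 1e6
facility_power_mw = gpu_power_mw * pue

print(f"GPU power alone:        ~{gpu_power_mw:.0f} MW")       # ~17 MW
print(f"With facility overhead: ~{facility_power_mw:.0f} MW")  # ~22 MW
```

Add CPUs, memory, networking, and storage on top, and multiple such clusters per site, and you land in the 50 to 100+ megawatt range described above.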
Jack Atherton: Fantastic. Okay, Mark, coming to you. I know you are a software guru, but keeping on the hardware theme for a second, can you talk just a little bit about the CapEx expectations and how they've evolved for Microsoft and the other hyperscalers? They're the cloud data center providers, the first pillar where all of the investment into the hardware is going. So it'd be great to hear your perspective there.
Mark Murphy: Right, Jack. That has seen, I would call it, a massive hockey stick upward. Microsoft, for example, previously in history would spend about 11 to 12% of revenue on the cash portion of their CapEx, call it something like 20 to 30 billion per year. And that was for building out traditional Azure data centers. The massive increment we learned about, I believe, last summer. And that was when Microsoft led us in a direction of thinking about nearly 50 billion in CapEx for this current fiscal year that's gonna be ending in June. And that translates to something closer to about 17% of revenue in cash CapEx. Just to compare, that 50 billion, I believe, is similar to Amazon's CapEx, around 55 billion this year. Right below that, you would have Google running around 44 billion. And then if you want to go to player number four, there's a huge drop-off. Oracle is run-rating something like 7 to 10 billion in CapEx. What I have found to be fascinating about this whole discussion is, ordinarily, when investors see a ton of CapEx, they know it's gonna depress the free cash flow for that company for the time being, and it can be viewed quite negatively depending on the circumstances. But because generative AI has such transformative potential, the psychology is that investors are kind of cheering on these huge CapEx ramps. Basically, the view is, well, the more you're spending, it's a signal of the pipeline of deals that you're looking at in generative AI. So then, it's an indication of how much land grab you are planning to take in the near term. So that's kind of how the CapEx spending trend is looking, and how the investor psychology around it has evolved. I would just add that I think at some level you try to put yourself in the shoes of these hyperscalers. This has to be a tough gamble that they're making. I mean, I keep saying that buying GPUs right now is like buying toilet paper during the pandemic: the supply is limited, people are stockpiling the GPUs, and so the prices are through the roof. And they've gotta be wondering if prices are gonna collapse once we get through this, more GPU supply comes online, you start moving past the training phase of these models where the compute is so intensive, and you get into more of a steady-state situation. But then the problem is, if you make that bet and wait it out, and you're wrong, you're kind of missing out on the early land grab. We were playing Monopoly here over the weekend with the kids. It's like passing on Broadway and Park Place, right? Those are the dark blue ones that you're gonna want when you land on them. So the psychology is interesting right now, and I'd say it's a calculated risk that they're all making.
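To line up the approximate figures Mark quotes, a small sketch. These are the rough numbers from the conversation, not reported financials; Oracle is shown at the midpoint of the 7 to 10 billion range.

```python
# Side-by-side of the approximate CapEx figures quoted above (USD billions).

capex_usd_b = {"Amazon": 55, "Microsoft": 50, "Google": 44, "Oracle": 8.5}
for company, capex in capex_usd_b.items():
    print(f"{company:<10} ~${capex:g}B")

# Microsoft's step-up in CapEx intensity, from ~11-12% of revenue to ~17%:
old_share, new_share = 0.115, 0.17
print(f"CapEx intensity up ~{new_share / old_share - 1:.0%} vs. the historical norm")
```

That roughly 50% jump in CapEx as a share of revenue is the hockey stick in the intensity of spend, separate from the absolute dollar growth.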
Jack Atherton: Yeah, it's interesting, that dynamic of good CapEx, because there really isn't a proven ROI case yet for this hyperscaler spend. It'll be interesting to see how that shakes out.
Mark Murphy: It's a bit of "if you build it, they will come," right? We're sort of in that phase, if you remember the movie Field of Dreams. (laughs)
Jack Atherton: I do. That's a very good point. And can you talk a little bit about the custom silicon programs that Microsoft, Amazon, Google have ongoing at the moment?
Mark Murphy: Yeah, so what I would say, Jack, is silicon is one area where Microsoft fell behind Amazon. Amazon had made these forward-looking investments in their chips called the Graviton chips for a whole bunch of years. They're already on their third generation there. And then Amazon moved on to the chips now called Inferentia, and then they have the Trainium chips. Those are AI-specific, and the idea is you're gonna provide cost-effective processing on both the training stage of the models, which you can think of as just building the models, and then the inferencing stage of the models, which you can think of as just running the models kind of steady state. Whereas Microsoft had been way ahead, and I mean dramatically ahead, in the software layers surrounding generative AI, right, because they had made that investment in OpenAI in 2019. It's crazy to think it's a half decade ago they realized what was happening, and they made this grand slam of an investment. But Microsoft really had not much publicly, not much commercially, with these custom silicon investments. And if you back up and you say, "Well, then what is happening here?" The question is, are you a software company or are you a systems company? I mean, that's what they're all trying to figure out. And if your aspirations are pretty big and pretty grand, you're gonna say, "Well, we're a systems company," right? It's like saying you're gonna build a house from the foundation up rather than kind of buying an existing house. And if you build it from the ground up, you can control every little choice, every little detail: the foundation, what type of cement, the type of rebar and all that. Last year, what happened is Microsoft announced this chip called Maia, an AI accelerator. They're gonna start rolling out these Maia chips in their Azure data centers, and, to make it relatable, the Maia chips are gonna power some of the Copilots and the Azure OpenAI service. And in my opinion, part of the reason to do it is you're trying to position against NVIDIA's dominance with GPUs. NVIDIA could turn into a juggernaut, right? And I think all the hyperscalers are aware of it.
Jack Atherton: Just as a follow-up there, this custom silicon that's being rolled out by the cloud providers, how does it compare versus an NVIDIA or AMD GPU, which I think of as a more general purpose chip, versus the specialized chips that Mark spoke about?
Gokul Hariharan: Yeah, so I think the custom silicon is mostly addressing very particular workloads, but these workloads are also quite big. Google was the first to roll them out, almost eight years back, with their first TPUs, or Tensor Processing Units. And they had the scale of workloads to tailor-make a specific chip just for that purpose. I think as AI workloads mature, you'll have more and more large language models and more and more applications of AI. These large cloud-scale providers will find applications which might need a specific piece of silicon, which can be designed for a very large task and can have the scale to perform better than a general purpose product from NVIDIA or AMD. But at the same time, do remember, most of the cloud providers also have a public cloud business, which is increasingly going to be running GPU clouds or AI clouds. And most of those applications are probably still going to be using NVIDIA, and maybe some other GPUs, but mostly NVIDIA, given that they are going to be programmable GPUs. A lot of enterprise customers and other tier two, tier three cloud companies will have to use those GPUs and will need to be familiar with the software that is required to program them. When it comes to those kinds of businesses, I think it's still going to be mostly NVIDIA. But for a lot of in-house use cases, I think we'll start to see the use of a lot more custom silicon over the next three to four years.
Jack Atherton: Great. And thinking about two different types of use case for hardware, so I wanna talk about training versus inference spend, and then also edge AI opportunities. Starting with training and inference, Gokul, can you just talk us through how that investment into the hardware differs for those two different parts of running the LLM?
Gokul Hariharan: Sure. So training is basically training these large language models. And here the emphasis very much is on speed, because it's a competitive game amongst large cloud providers as well as startup AI companies: who's going to be fastest to get their latest large language model, focused on a particular data set or a particular function, out into the market. As a result, people have been using the latest and the greatest GPUs coming out from NVIDIA. Last year it was all the NVIDIA H100s. NVIDIA just announced their next generation GPUs, which should be available towards the end of this year. And once they're available, most of the training workloads will pretty much immediately switch to those GPUs. For generative AI, if you look at the last 12 months, almost 60 to 70% of the spend has been on training, which is why we've been focusing on GPUs quite a bit. Now, the inference and fine-tuning of already established large language models is starting to pick up. We are starting to have more AI applications like Microsoft Copilot and other applications coming into the market. When it comes to inference, the focus will start moving on to different variables. For one, it will be about the cost of running these AI queries: how to efficiently use compute resources, and how to deploy these in different applications, which might require different kinds of computing workloads. So as inference starts to pick up, the kind of silicon solution used will also become a little bit more varied. Today, I think it's an 80 to 90% GPU market, given that the focus so far has been a lot more on training. Eventually, I think 80 to 90% of the demand is going to come from inference, but it's going to take some time. So as the inference applications start to pick up, we will start to see more diverse silicon than just GPUs coming in. As we discussed earlier, the custom chips will become a lot more important in a lot of inference applications, especially custom chips from Google, Amazon, Meta, and, as Mark mentioned, Microsoft. And lastly, eventually AI application companies will want their inference engines to run not just in the data center. They will also want them to run on everyday devices that you and I use, like PCs and smartphones, what people call edge AI: AI pushed to the edge. That will also start to pick up, and that obviously will represent a much bigger addressable market for AI application companies, given that there are hundreds of millions of edge devices already out there.
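A minimal sketch of the mix shift Gokul describes: roughly 60 to 70% of GenAI spend on training over the last 12 months, flipping to 80 to 90% inference eventually. The linear path and the five-year horizon are assumptions for illustration, not forecasts.

```python
# Illustrative training-vs-inference spend mix, interpolating between the
# quoted points: ~65% training today, ~85% inference at maturity.
# The linear path and 5-year horizon are assumptions, not forecasts.

def spend_mix(year: int, start: float = 0.65, end: float = 0.15, years: int = 5):
    """Return (training_share, inference_share) after `year` years."""
    t = min(max(year / years, 0.0), 1.0)
    training = start + (end - start) * t
    return training, 1.0 - training

for year in range(6):
    train, infer = spend_mix(year)
    print(f"Year {year}: training {train:.0%}, inference {infer:.0%}")
```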
Jack Atherton: And I think running into Apple's WWDC in June, it will be interesting to see how they start to embrace AI in their product set. Thank you very much for that. So moving over to software. Mark, starting with the cloud infrastructure space, so that includes the hyperscalers, but also some of the other names in your coverage, Snowflake, Datadog, et cetera, can you talk about what evidence we're seeing of AI-driven demand at the moment?
Mark Murphy: It's a great question, because everyone sees how the spending is truly materializing for NVIDIA, and starting to for AMD, and this whole semiconductor value chain and supply chain that Gokul speaks to, including on the memory side. And they're definitely trying to extrapolate that, and they wanna look at that and say, how and when is this gonna show up in the software landscape? What I would say is the earliest indication was what we mentioned earlier, this gigantic CapEx ramp that we saw from Microsoft and the other hyperscalers. In my mind, the timing of that is roughly last summer. And at that time, people weren't sure whether to trust it or not. But in the case of Microsoft, they had to wave their arms and say, "Well, we're gonna spend this money based on the demand signals" from GitHub Copilot, right, and then they have this Azure OpenAI service, they power ChatGPT itself, the Microsoft 365 Copilot that Gokul mentioned, the Security Copilot and all that. The other evidence really is, just listen to any software company's earnings call or their analyst day presentations. I'd say 80 to 90% of what they're talking about relates to AI: how they're positioning for it, why they're in a position to benefit, how they're gonna roll out products in the future. It's a huge amount of the airtime. And when you go to trade shows and user conferences, walk up to any booth, what are people asking you about? It's AI all the time. If you fast-forward to the second half of last year, our view was, you buy a ton of GPUs, say you're a large bank or you're a retailer, or you're a pharmaceutical company, obviously, you have to stick them somewhere. So they're in a rack, they're going to use tons of electrical power, they're gonna have the memory and the storage and the network to draw upon. So generally that's gonna be a hyperscaler, right? And we think it's most commonly Microsoft Azure, secondarily Google, and in third place is probably AWS. You're training the LLM or some other type of AI model, and you're gonna use the GPUs in the hyperscaler. What we keep saying is there will be pin action around that for the other consumption software names. So if you need to monitor what's happening in that process, that's gonna be Datadog. And I would mention Datadog can monitor for bias and hallucinations in these models. I think a lot of people don't know that yet. There's a whole security software ecosystem. You might need to wire up some data flows, some connectivity. That could be something like Confluent. You might kick off an analytical process. That could be Databricks or Snowflake, as you mentioned. So we started to see that happening in the Q4 earnings season. So the stabilization in hyperscaler growth comes first, and then stabilization in some of the other software consumption models. And I would just say, again, my simple analogy, it's like building a house or even building a skyscraper. You start at the ground level. You dig a hole, you lay down some rebar. You're pouring concrete, making a foundation, and that's kinda where we are. I mean, that's where the spending is right now. Later, you're gonna have the lumber, the steel beams, you're gonna think about electrical and plumbing, and it just goes on and on and on. It's gonna take a while before you're thinking about tenants actually moving into the building, right? And that's gonna be when you see it at the SaaS software layer. And there's gonna be a long delay, we think, before we get there.
Jack Atherton: Thinking about the hyperscalers, historically we've seen more and more multi-cloud strategies get embraced by the enterprise. And last year, there was some fear creeping in that, given Microsoft's investment in OpenAI and the early lead that they had within generative AI, you might start to see a greater gravitation towards Microsoft and Azure. Can you just talk about how that's evolved as we've learned more about the generative AI landscape?
Mark Murphy: Personally, Jack, I think it's going to be a real accelerant to multi-cloud. And we think, as of now, it's disproportionately benefiting Microsoft and OpenAI. Ironically, I would say both aspects of what you said are true in this case, because multi-cloud, as a concept, is basically bad for Amazon AWS, right? If you're the gigantic, disproportionate number one winner of what I would call the pre-AI era of cloud computing, you're not gonna look favorably on the concept of multi-cloud. Amazon would annihilate that trend if they could, but what happens is that customers want to use the best technology that they can get, and AWS has always had that. I mean, they've had it for 15 years. All their cloud infrastructure for compute, memory, storage, networking, and a lot of the other services was always the best. And AWS just fell massively behind in AI. They fell behind Microsoft and OpenAI. To some extent, they probably fell behind Google. So Microsoft just has the best products for AI today. It's not a secret. We see it very clearly in our CIO survey work. And interestingly, still to this day, basically Microsoft and Google have their own large language models and Amazon mostly does not. So what's happening is, some of the customers were sole-sourced on Amazon AWS. And there's gonna be a data gravity, right? So it'll be there a long time, but some of them are just kind of rethinking the future of their cloud infrastructure stack. And there's probably gonna be a little more gravitation of some of those workloads into Azure and into Google.
Jack Atherton: Fantastic. And thinking about that sixth pillar of AI investment that I talked about at the start, which is the application layer: we're clearly in the very early days on the adoption curve of GenAI applications, but we've started to see it with the Office Copilot that you touched on, with Adobe Firefly, and others. Can you just talk about how you imagine or expect the evolution of those products to play out?
Mark Murphy: Well, there's gonna be a very big difference in the sequencing of this whole value chain. And it is much clearer, and much more obvious, in Gokul's world right now than it is in software. And I would, again, just think about building a house or a skyscraper, right? So I think that's the analogy for how long it's gonna take these products like Copilot and Firefly to see really serious monetization through the applications. One thing I'd point out: think about if you're building an internal AI application, like you have a small internal IT team. Say we did it for pricing derivatives or swaps, or you're calculating a risk premium in insurance or something like that. If a system goes in on a Wall Street trading desk, it might just be one computer system using it, or it might be a couple of individuals. And so the risk is kind of contained. And then, think about taking Copilot or one of these products and rolling it out to, you know, a quarter million people. Companies are gonna take their time with that. And we see that in our survey work. It's gonna take about 12 months to reach 1% penetration with these products, because companies just worry about it. They're worried about bias, and it could have hallucinations. It's like a quarter million windows that would be open into the building: you can get wrong answers, you can have rogue users who'll go in and try to trick the system, and they're worried about leaking sensitive data. So it's gonna take a while on the app side, I believe, Jack. What we say is, it's gonna go slow until it goes fast. And the time horizon for getting this mainstream, based on our survey work, is something like two to three years out from what we can see.
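"Slow until it goes fast" is the shape of a logistic, S-shaped adoption curve, and the two survey points Mark cites (about 1% penetration at 12 months, mainstream at two to three years) roughly pin one down. A minimal sketch; the midpoint and steepness parameters are illustrative assumptions fitted by eye, not survey outputs.

```python
import math

# Logistic adoption curve loosely calibrated to the survey points above:
# ~1% of eventual adopters at month 12, the bulk by months 24-36.
# Midpoint and steepness are illustrative assumptions.

def adoption(months: float, midpoint: float = 30, steepness: float = 0.25) -> float:
    """Share of eventual adopters reached after `months`."""
    return 1 / (1 + math.exp(-steepness * (months - midpoint)))

for m in (6, 12, 24, 36, 48):
    print(f"Month {m:>2}: {adoption(m):.0%} of eventual adopters")
# Month 12: ~1%; Month 24: ~18%; Month 36: ~82% -- slow, then fast.
```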
Jack Atherton: All right, I think that's a fantastic place to wrap up. Mark, Gokul, thank you so much for joining us today, and thank you everyone for listening.
Mark Murphy: Thank you so much. It's a real pleasure to be here. And Gokul, great having this discussion with you.
Gokul Hariharan: Likewise, Mark. And thanks, Jack, for hosting us.
[End of episode]
Dive into the future of generative AI with Mark Murphy, Head of U.S. Enterprise Software Research, Gokul Hariharan, Co-Head of APAC TMT Research, and Jack Atherton, who covers TMT Specialist Sales. In this episode, they explore the burgeoning generative AI landscape: What are companies spending on? What’s going on with custom silicon programs? And where are we on the adoption curve?
This podcast was recorded on March 25, 2024.
More from Research Recap
Hear additional conversations with J.P. Morgan Global Research analysts, who explore the dynamics across equity markets, the factors driving change across sectors, geopolitical events and more.
More from Making Sense
Research Recap is part of J.P. Morgan’s Commercial & Investment Bank podcast, Making Sense. In each episode, leaders from across the firm share insights on the events that are shaping companies, industries and markets around the world.