Institutional investors are adopting increasingly complex investment strategies that require more data and advanced analytics. This has driven investors toward AI-readiness in data management, which can accelerate their ability to obtain actionable intelligence. The rise of ESG strategies and the structural convergence of public and private assets further complicate data workflows.

The top data challenges

Data access, consumption and integration are major challenges facing asset managers, asset owners and corporates. Only a quarter of financial organizations can source and manage all their data inputs effectively, according to a report from Worldwide Business Research (WBR) Insights and Rimes.

For example, data discovery and distribution often present obstacles for organizations because information sources are frequently siloed. In addition, each investment process relies on data feeds from various vendors, and formatting can be inconsistent, which requires data teams to spend a significant amount of time on standardization.

Manually addressing these complexities and reconciling data does not fit smoothly into existing workflows. Firms use multiple data providers, each with many data feeds and vendor connectivity nuances. There is also a dependency on methods of data transfer, such as FTP, that are disconnected from downstream processes; this further complicates error handling and often limits the scale of a data strategy.

Against the backdrop of these challenges, there is an increasing need to make data available for data science and machine learning – and the data must be in a state that’s ready for use, while ensuring there are suitable controls around access and how it’s used.  

Jammed data, stored in a variety of sources

Financial institutions use many data sources and vendors, but data is typically structured differently across sources and is often incomplete, with differing formats and identifiers. Entire teams are required to clean data manually. This process can be time-consuming and expensive, thus limiting data accessibility. Institutional investors who are facing pressure to get their data management AI-ready are confronted with another challenge: the data itself isn’t ready, lacking a common semantic layer to model and normalize it across multiple providers, sources, types and structures. 
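To make the semantic-layer idea concrete, here is a minimal, hypothetical Python sketch. All vendor names, field names and the identifier lookup below are invented for illustration, and a production security master would be far richer; the point is simply that each feed is mapped once onto one common schema, so downstream users never handle vendor-specific formats or identifiers.

```python
# Hypothetical sketch: two vendors deliver the same holdings data with
# different field names, identifier types and date formats. A thin
# semantic layer maps each feed onto one common schema so downstream
# consumers never see vendor-specific structure.
from datetime import datetime

COMMON_FIELDS = ["isin", "as_of_date", "market_value"]

def normalize_vendor_a(record: dict) -> dict:
    # Vendor A (invented): ISINs already, US-style dates, values as strings
    return {
        "isin": record["ISIN"],
        "as_of_date": datetime.strptime(record["AsOf"], "%m/%d/%Y").date().isoformat(),
        "market_value": float(record["MktVal"].replace(",", "")),
    }

def normalize_vendor_b(record: dict) -> dict:
    # Vendor B (invented): SEDOL identifiers need a lookup, ISO dates, numeric values
    sedol_to_isin = {"2046251": "US0378331005"}  # stand-in for a real security master
    return {
        "isin": sedol_to_isin[record["sedol"]],
        "as_of_date": record["date"],  # already ISO 8601
        "market_value": record["value"],
    }

feeds = [
    (normalize_vendor_a, {"ISIN": "US0378331005", "AsOf": "03/29/2024", "MktVal": "1,250,000.50"}),
    (normalize_vendor_b, {"sedol": "2046251", "date": "2024-03-29", "value": 980000.0}),
]

# After normalization, every record has the same fields and conventions,
# so joins, analytics and reporting can be written once.
unified = [normalize(rec) for normalize, rec in feeds]
assert all(set(row) == set(COMMON_FIELDS) for row in unified)
```

The payoff of doing this once, centrally, is reusability: the next consumer of either feed inherits the clean schema instead of repeating the cleanup.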

As data demands continue to grow, different teams have varying data needs. In addition, financial institutions underutilize the data they currently have, leading to wasted money and resources. Data from many sources must be extracted manually. Financial institutions end up polling servers for updates instead of using event-driven notifications and APIs. These queries are wasteful, unnecessarily consuming bandwidth and leading to processing delays. Plus, the disparate data isn't unified.
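The polling-versus-events contrast can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's API: the `DataSource` class and its methods are invented for the example. Polling re-queries the server on a timer whether or not anything changed; an event-driven consumer does work only when the source signals that a new dataset version exists.

```python
# Toy contrast between pull (polling) and push (event-driven) consumption.
class DataSource:
    def __init__(self):
        self.version = 1
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish_update(self):
        self.version += 1
        for cb in self._subscribers:   # push: notify only when data changes
            cb(self.version)

def poll(source, last_seen, n_checks):
    """Pull model: every check costs a request, even when nothing changed."""
    requests_made, updates_seen = 0, 0
    for _ in range(n_checks):
        requests_made += 1
        if source.version != last_seen:
            last_seen = source.version
            updates_seen += 1
    return requests_made, updates_seen

source = DataSource()
received = []
source.subscribe(received.append)

# Ten polling cycles while nothing changes: ten requests, zero updates.
requests_made, updates_seen = poll(source, last_seen=1, n_checks=10)

# One real change under the event model: exactly one notification.
source.publish_update()

print(requests_made, updates_seen, received)  # → 10 0 [2]
```

The asymmetry is the point of the text above: the pull model's cost scales with how often you check, while the push model's cost scales with how often the data actually changes.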

Accessing usable data efficiently is a widespread issue for institutional investors given the need for manual extraction and pervasive errors. Firms are increasingly investing in more sophisticated data management strategies, with 60% of investment management firms polled telling Broadridge they plan to increase spending on digital and data analytics in the next two years. 

A better way to manage data


Given the pressing challenges financial institutions face in accessing and managing data at scale for both public and private assets, investing in cloud-native capabilities can serve as a ready solution for consuming and integrating data.

Data Mesh, part of the Fusion solution, enables investors to access data across sources through modern distribution channels including API, Jupyter Notebook and cloud-native channels such as Snowflake and Databricks. Through Data Mesh, investors can simplify their consumption models and accelerate their analytic outcomes and subsequent strategic decisions. 

As a leading asset servicing provider, J.P. Morgan uses its expertise and deep understanding of complex client challenges to offer a better way to manage data. Fusion by J.P. Morgan, a data technology solution for institutional investors, provides end-to-end data management, analytics and reporting across the investment lifecycle. The platform seamlessly integrates and combines data from multiple sources into a single data model that delivers the benefits of scale and reduced costs, along with the ability to more easily unlock timely analysis and insights. 

Addressing key challenges and solving data access issues across a firm’s data landscape requires the right technology and expertise. This is where Fusion Data Mesh can help firms take advantage of the cloud’s elasticity and tap into the growth and rapid development in analytics. 

Podcast: Evolution of data management and where to next


This episode covers the data landscape’s evolution, how the industry is addressing common pain points and how AI is likely to drive further change in this space. Eloise Goulder, Head of the Global Data Assets & Alpha Group, hosts the conversation with Gerard Francis, Head of Product for Data and AI, and Head of Fusion by J.P. Morgan, and Ashley Peterson from Fusion Sales.

Market Matters | Data Assets & Alpha Group: Evolution of Data Management – And Where to Next

 

[MUSIC]

 

Eloise Goulder: Hi, and welcome back to the latest bi-weekly data-driven podcast on the J.P. Morgan Making Sense podcast channel. I'm Eloise Goulder, head of the Data Assets and Alpha Group. Today I'm really delighted to be sitting here with Gerard Francis, who is head of Fusion, our data platform here, and also head of product for data and AI at J.P. Morgan. I'm also joined here by Ashley Peterson, who represents Fusion Sales and is part of the broader data and analytics sales team. I'm really looking forward to diving into the evolution of data and data management and how this space is likely to evolve from here. Gerard, Ashley, thank you so much for joining me here today.

 

Gerard Francis: It's our pleasure, Eloise. Glad to be here.

 

Eloise: Gerard, could you start by introducing yourself and your background?

 

Gerard: Sure. I've been at J.P. Morgan a few years now, focused very much on building out a data management platform, in addition to building the product for data and AI within J.P. Morgan. Prior to this, I spent about 25 years at Bloomberg, the last eight of which were running all of Bloomberg's data businesses, spanning the breadth from real-time to reference to research data sets, et cetera. I have a lot of gray hair in the data space.

 

Eloise: So 25 years at Bloomberg. You must have a real wealth of experience in the data landscape.

 

Gerard: Yes. I think a lot of experience was with dealing with our customers, which ranged across the entire breadth of the financial industry with a lot of time spent on the buy side. In that position, I was really able to understand from the ground up what people's challenges were, how they evolved, how the industry itself has changed so much over the last few years, including, of course, building a lot of product along the way.

 

Eloise: Yes, brilliant. If we think about the evolution of data and data usage over the last 25 years, how would you really describe the main changes?

 

Gerard: It's been a dramatic change. I think people historically never really thought about data as being important. They thought about it as being necessary in order to perform a function. The function was always the application. Data was very often designed specifically to be used within a single application. There were some sets of data that were common across the firm, things like reference data, pricing data were used in common, but beyond that, it was very enclosed in a narrow application. Through time, that has evolved. I think we've seen a few waves of that as we go through. I think post the financial crisis, there was a big focus on data from a governance perspective for people to really understand, do you know your data? Is your data of adequate quality? That led to the roles of chief data officers being created. As we progressed, as data science became more of a thing, people realized that you just couldn't have the data and the applications. Then people wanted to access the data within their data science environments in order to do cool things with it. Those cool things have gotten more and more exciting through time. Now, we're at a phase where people have realized that, actually, the applications and the analytics can be pretty simple if one really has access to amazing data to drive those applications and analytics. Now the industry is really in a place where it wants access to good data, great quality, extensive depth of history, being able to connect and join them very simply to get to value. Because ultimately, this is a race to speed. The cleaner your data is, the better the history you've got, the more joined up it is, the quicker an investor can get to the point of value.

 

Eloise: Yes, that makes a lot of sense. I guess particularly in recent years, we've seen really significant investment on the buy side in technologists and data scientists and users of data.

 

Gerard: Yes, there's been a sea change. I think many things have driven that. I think one key enabler has been the cloud itself, because the presence of the cloud has now enabled people to do things that were far harder in the past. People have tried to unlock the data put into the cloud. The types of tools and capabilities available today are dramatically different to where they were five years ago. Tools like Snowflake, which make data querying very easy, things like Databricks that have made machine learning very easy, they've come about. Suddenly, the buy side has been empowered to solve problems in a way they could never have done in the past. They've also seen the reward because there are a lot more quant driven funds that exist, a lot more quantitative approaches to find value. As that has increased, people have realized the importance of having the range of data scientists, data engineers, who can take the data, harvest it, create ideas, and then really work with it in order to drive alpha.

 

Eloise: And when you think about the users of data, how would you categorize them? You were mentioning earlier that post great financial crisis, there was a lot of demand for data on the risk management side. On the other hand, you were just talking about use of data on the alpha generation side for quant hedge funds, for example. How would you categorize the different users of this data, and how has that changed over time?

 

Gerard: Sure. I think if you break it up, the reference data function has historically been very important because, without good instrument reference data, nothing else tends to work. People historically had teams of people very focused in getting the reference data right, which is a very operationally intense area. In addition to that, within the buy side, there was a big focus in the data around positions in order to do performance analytics, risk analytics, et cetera. Then again, they had various teams that were heavily focused on bringing in all the fund accounting data, processing it, normalizing it, before they could do their performance risk and returns, which was really very important for the end investors. Then as we went through the crisis, the focus on risk became important. Again, you had people with both a risk background and a data background focused on really how do I organize the data so I can trust the numbers. As the regulation began to increase, you had more people step in with the role of CDO or the stronger role of governance. They began to look at the processes around the data to understand when decisions are made on the back of data. Are they being made with the right governance? Does the data have the right quality? Has the right process been applied? That is still a continuing effort because the importance of quality and data lineage is only going to continue to grow as people go through that journey. This area or these areas have really been about structured data and very high levels of governance. In parallel with that, the research areas began to realize the value of data. To some extent, they always did because people built financial models through Excel and spreadsheets where they made their assumptions, et cetera. That is an extensive area of the market and will continue to be. 
In parallel to that, with the new advent of techniques such as data science where people could have folk who had an experience of Python and then be able to do some really amazing things with the data and discovering value, that discipline began to grow. Initially, it started as a cottage industry where people, the folk who had those skills, whether it was data science skills or data engineering skills, and they cobbled together their own platform based on the pieces of technology they had access to. As people have seen the value of the results, they are now doubling down in that area. In that process, they are now investing in much better platforms that can take in data at scale, that are much more structured, where if the data is normalized once, it has reusability. The next data scientist who comes around and has an idea doesn't need to go back to the basics but can benefit from the work that has been done in the past. The work is accretive. Data cataloging has become more of a thing where organisations want to now understand their data better where it's stored and make sure that it is accessible to the data scientist. The skills involved have grown through time. It started off very much in a SQL type of area and in SQL and database, structured databases. Now, we're very much in the space of data science where people use a lot more probabilistic techniques to work with the data. The one thing that is now common is people want data to be widely accessible across the entire organisation. The same data that is being used for reference data and operations needs to be the same data that is being used by data scientists to generate alpha. A very good use case of that is the data in the ESG space, so the environmental space where this data is hard to come by. There are lots of gaps in the data. 
People need it across the entire spectrum from the research analysts and the portfolio managers who want to make investment decisions based on the values of the numbers, whether it's the actual emissions or it's the projected scores. It goes all the way down to the folk who are doing the reporting and the performance because now they need the data in a highly structured, well-governed, with a lot of data quality applied to the area because now they're reporting back to their clients on their performance based on the ESG metrics. That's how you begin to see data span across the entire spectrum from the front office all the way to the middle and the back office.

 

Eloise: Thank you. I'm really glad you articulated that last piece about organizations wanting to share the same data across different functions. It's clear that historically that probably wasn't the case. As use of data has increased, whether it's on the risk management side or the alpha generation side or the trading side, it's quite obvious in retrospect that they should want to speak to each other. Actually, it's reminding me of a discussion I had with the chief risk officer at Man Group on this podcast series, Daryl Yarwich, where he described the benefits of having exactly the same data available to him in a chief risk officer function as for the alpha generation side, because they can both speak to exactly the same themes. Think about signals like retail sentiment: they're helpful as an alpha signal for the portfolio manager, but equally as a risk management tool for the risk officers.

 

Gerard: Yes, that makes sense, especially now when you extend the concept to large language models and generative AI, where people want to take advantage of those types of areas. The data extends now from a lot of structured data to a lot of unstructured data. People want to have access to all their research documents, to their contracts, to their HR policies, all of it organized in a structured way, all of it current because you don't want to have a generative AI model giving you advice based on dated information. Folk now have to get organized around that area. As we look to the future, it's going to be a combination of both structured and the unstructured data because now you can begin, in the future, to use large language models to work with both your actual numbers themselves, and along with textual documents to provide incredible value to the users.

 

Eloise: That makes a lot of sense. So… can we touch on pain points that all of these users typically have? The user base has obviously grown enormously, but there must be a lot of pain points still out there.

 

Gerard: Yes, I think there are a whole range of them. I think for an institutional investor, their first challenge is their data comes from a multitude of sources. It's not just one bank, it's multiple banks. It's not just one provider, it's multiple providers, multiple vendors. All of them deliver their data through different technologies and different formats. The first pain point is how do I handle this vast amount of data coming to me and yet get it in the simplest possible way so I don't have to allocate my internal resources in order to ingest the data. That process needs to get really simple. The second aspect is once the data lands inside my organisation, how can I cleanse it and normalize it to make it useful because as it lands, it's probably not very useful to my end user? The third challenge is now that I've got the data and, hopefully, it's been normalized, how can I make it discoverable? How can John know the data is present even though Jane was the one who brought it into the organisation? How do you really make the discovery process work? Once you get past the discovery process, how do I give them access? Should this person have access to the data? What's the means by which I can really entitle them so they can be able to easily access the data? Once I've got that data, then how can I easily use it within my analytical application that exists? That's a whole range of challenges that exist. If you fast forward, what people would really love to have is can all the data really function as an Uber warehouse so I can then answer complex problems like give me all the portfolios that I have that are contingent on technology stocks in China, but also have some dependency on raw materials in Indonesia. 
When one tries to answer problems like that, one is really beginning to scan a vast amount of data, the reference database, the pricing database, the research database, data from vendors about different companies, potentially ESG metrics, and in a simple way, being able to actually generate that result. Today it tends to be very hard because the data very often sits in silos and it doesn't come together. That's another massive pain point that the buy side would like to overcome.

 

Eloise: So Gerard, you've spoken a lot about how the industry has evolved over the last 25 years. When we look forwards, where is the industry going? Where is the data management landscape going? Also, AI is such a hot topic among investors right now. How do you think data management will evolve with respect to AI and clients leveraging AI?

 

Gerard: I think the first thing that will become increasingly relevant for us is AI itself. AI offers a massive amount of potential for investors in so many different dimensions. I won't dwell on the benefits of generative AI and what people can do with that. What underlies AI and unlocking AI is really the data, because when you ask ChatGPT a question, it's really going to give you an answer based on public data. Whereas what we really want is an answer that's completely customized to all the information that we have. That is a hard problem that cannot be solved unless one really has full control and understanding of one's data. I think as people get excited about AI but realize its limitations, because unless you've got your data all sorted out AI tends to be less useful, folk will double down even harder on solving the data problem. It's not an easy problem to solve, which is why I really think that with the type of tools we bring to bear, we can make a difference. I do think the industry will continue to be on a journey to make their structured and unstructured data simple and easy to understand. To your point earlier about having the same data consistently, people want to focus on that to eliminate duplication, so they can be more confident about the answer. I do think more tools will come about as a result of AI that'll help us spot data quality problems better and be able to resolve some of the issues better. I think organisations will get more efficient at working with data. Having said that, the Nirvana is when all your data is really holistically joined and connected, with all of the history, and being able to understand that. That's what people really want access to and that's something they'll continue to strive for. It won't be an easy problem, but I do think people will focus on trying to solve that. I think AI will move a lot faster than the data itself.

 

Eloise: Gerard, just last question for you before we turn to Ashley. You seem so passionate about the work you're doing and the industry as a whole. What makes you so passionate about this?

 

Gerard: I've been in this space for quite a while. On the one hand, I really understood or learned from our institutional investors and what their frustrations are, what their pain points are, and what people have really begun to see as a massive stumbling block. Right now, I actually think, based on the team we've got, we've really put together a solution that makes a lot of these problems really go away. A few years ago, I wouldn't have thought that's possible. When you actually see something that can have a massive impact on people and make a big difference, it is very exciting. I'm super happy to be part of the team.

 

Eloise: I think this is a great segue into our Fusion platform. So Ashley, can we turn to you now? Can you tell me more about what this really is?

 

Ashley: Happy to. So Fusion by J.P. Morgan is a managed data service designed for institutional investors such as asset managers and asset owners. With Fusion, we're looking to solve the complete data problem, with the objective that data is no longer a blocker for our clients. So, basically, it simplifies the connection between data producers and data consumers. With Fusion, investors can get clean, ready-to-use data from any source or domain, directly into their tech stack. So this could include their own data, alongside J.P. Morgan data, alongside multiple vendor data. So what are the benefits of this? The benefits are vast. It helps reduce costs, delivering benefits of scale, allowing for analysis and insights in days and weeks instead of months and years. It gives investors the ability to save 90% of their time in the data wrangling process.

 

Eloise: Thank you, Ashley. Gerard, Ashley, thank you so much for being here with us today.

 

Gerard: Thank you for hosting us, Eloise. We really enjoyed the conversation. It's such an important conversation and it's wonderful to both have the conversation with you and through you, all our listeners around the world.

 

Eloise: Thank you also to our listeners for tuning into this biweekly podcast series from our group. If you'd like to learn more about the Fusion platform, then please do visit fusion.jpmorgan.com or reach out to your JPMorgan salesperson. Otherwise, if you have feedback or if you'd like to get in touch, then please do go to our team's website at jpmorgan.com/market-data-intelligence. There, you can send us a message via the Contact Us form. With that, we'll close. Thank you.

 

[END OF AUDIO]


FOR INSTITUTIONAL & PROFESSIONAL CLIENTS ONLY—NOT INTENDED FOR RETAIL CUSTOMER USE

This is not a product of J.P. Morgan Research.
J.P. Morgan is a marketing name for the Securities Services businesses of JPMorgan Chase Bank, N.A. and its affiliates worldwide.

JPMorgan Chase Bank, N.A. is regulated by the Office of the Comptroller of the Currency in the U.S.A., by the Prudential
Regulation Authority in the U.K. and subject to regulation by the Financial Conduct Authority and to limited regulation by the Prudential Regulation Authority, as well as the regulations of the countries in which it or its affiliates undertake regulated activities. Details about the extent of our regulation by the Prudential Regulation Authority, or other applicable regulators are available from us on request.

J.P. Morgan and its affiliates do not provide tax, legal or accounting advice. This material has been prepared for informational purposes only and is not intended to provide, and should not be relied on for, tax, legal, regulatory or accounting advice. You should consult your own tax, legal, regulatory and accounting advisors before engaging in any transaction.

This document is not intended as a recommendation or an offer or solicitation for the purchase or sale of any security or financial instrument. Rather, this document has been prepared exclusively for the internal use of the J.P. Morgan clients and prospective clients to whom it is addressed (including the clients’ affiliates, the “Company”) in order to assist the Company in evaluating, on a preliminary basis, certain products or services that may be provided by J.P. Morgan.

This document is provided for informational purposes only and is incomplete without reference to, and should be viewed solely in conjunction with, the oral briefing provided by J.P. Morgan. Any opinions expressed herein may differ from the opinions expressed by other areas of J.P. Morgan.

This document may not be disclosed, published, disseminated or used for any other purpose without the prior written consent of J.P. Morgan. The statements in this material are confidential and proprietary to J.P. Morgan and are not intended to be legally binding. All data and other information (including that which may be derived from third party sources believed to be reliable) contained in this material are not warranted as to completeness or accuracy and are subject to change without notice.

J.P. Morgan disclaims any responsibility or liability to the fullest extent permitted by applicable law, whether in contract, tort (including, without limitation, negligence), equity or otherwise, for any loss or damage arising from any reliance on or the use of this material in any way. The information contained herein is as of the date and time referenced only, and J.P. Morgan does not undertake any obligation to update such information.

J.P. Morgan is the global brand name for JPMorgan Chase & Co. and its subsidiaries and affiliates worldwide. All product names, company names and logos mentioned herein are trademarks or registered trademarks of their respective owners. Access to financial products and execution services is offered through J.P. Morgan Securities LLC (“JPMS LLC”) and J.P. Morgan Securities plc (“JPMS plc”). Clearing, prime brokerage and brokerage custody services are provided by JPMS LLC in the U.S. and JPMS plc in the U.K. Bank custody services are provided by JPMorgan Chase Bank, N.A. JPMS LLC is a registered U.S. broker dealer affiliate of JPMorgan Chase & Co., and is a member of FINRA, NYSE and SIPC. JPMS plc is authorized by the PRA and regulated by the FCA and the PRA in the U.K. JPMS plc is exempt from the licensing provisions of the Financial and Intermediary Services Act, 2002 (South Africa). J.P. Morgan Securities (Asia Pacific) Limited is regulated by the HKMA. J.P. Morgan Europe Limited, Amsterdam Branch does not offer services or products to clients who are pension plans governed by the U.S. Employee Retirement Income Security Act of 1974 (ERISA). For additional regulatory disclosures regarding these entities, please consult: www.jpmorgan.com/disclosures.

The products and services described in this document are offered by JPMorgan Chase Bank, N.A. or its affiliates subject to applicable laws and regulations and service terms. Not all products and services are available in all locations. Eligibility for particular products and services will be determined by JPMorgan Chase Bank, N.A. and/or its affiliates.

© 2024 JPMorgan Chase & Co. All rights reserved. JPMorgan Chase Bank, N.A. Member FDIC.