
Loading summary
John Bowman
Welcome to Capital Decanted. In this show, we say goodbye to tired market takes and superficial sound bites, because here, instead of skimming the surface, we dive into the heart of capital allocation, striking the perfect balance and exposing the subtleties that reveal the topic's true essence. Prepare to have your perspectives challenged as we open up the issues that resonate with the hearts and minds of those shaping capital allocation. We've enlisted the wisdom of visionary leaders in the industry, and just like a meticulously crafted wine, we'll allow their insights to breathe, unfurling their hidden depths and transforming our understanding. This is season two, episode three of Capital Decanted: Artificial Intelligence and the Investment Professional. I'm John Bowman. And I'm Kristy Townsend, and we are your hosts. As always, a huge thank you to our returning title sponsor, Alternatives by Franklin Templeton. So grateful to have them back partnering with us. With over 40 years of alt investing and over $260 billion of assets under management, their specialist investment managers have expertise across six different asset classes: real estate, private equity, private credit, hedge strategies, venture capital, and digital assets. And of course, all of them operate with the client-first mentality that has always defined Franklin Templeton, to help prioritize investment outcomes. So thanks as always, Alternatives by Franklin Templeton. We'll talk with them again at halftime, so stay tuned for that. So as of this recording, Nvidia, the computing company that fuels artificial intelligence, is the most valuable company in the world at about 3.5 trillion USD. OpenAI, the originally open-source not-for-profit that went mercenary and commercial, and then of course gave the world ChatGPT, is an ugly leadership and governance train wreck on the front pages that we can't seem to take our eyes off of.
Meta, Google, Microsoft, Amazon, Tesla, and of course Nvidia again are expected to spend well over a trillion dollars on AI R&D over the next few years. It's tough to read a financial periodical or attend an investment conference these days without AI discussions dominating, often to excess. We see sensational headlines like "Here's the robot that will steal your job," or charges to disrupt, transform, or die. These headlines fill our feeds, and they raise fear-mongering narratives that channel Isaac Asimov's I, Robot or James Cameron's Skynet from the Terminator series. I'd encourage all of you to listen to our blockchain computing episode last season, that's episode 13, where we provide a more wholesome discussion of technology hype cycles. But it certainly feels like we're in the midst of what Carlota Perez called the frenzy phase, or what Gartner calls the peak of inflated expectations. Now, for context, at the time I described those phases as, quote, being defined by fervor, as hype outpaces sensibility, gobs of VC money pour in, and retail narratives take on grandiose conclusions. That seems pretty accurate, frankly, for AI as well. But we also said at the time that historical tech platform breakthroughs always feel outlandish or like toys when they start, and nothing in tech history ever proceeds without first a whole lot of excess. So while we should be circumspect, we also need to be open-minded as investors. The tendency to overestimate the effect of technology in the short term and underestimate it in the long term is famously known as Amara's Law, named after the Stanford computer scientist Roy Amara. And Amara's Law, Kristy, as you know, is undefeated. We always underestimate in the long term, even though we are skeptical, rightly so, in the short term. So let's be honest: AI processing power provides organizations with massive advantages. Common wisdom suggests machines process at roughly 2,000 times the speed of humans. And of course, they don't need lunch breaks.
They don't need to sleep or take weekends off. They don't even desire holidays with their families. And that computing advantage has been a gift to the world in powering algorithmic convenience and relevance in our social feeds, in our Netflix recommendations and our Amazon purchases, and now in generative AI tools, just to name a few. But of course, it has a dark underbelly too: data ethics, privacy, teen mental health, and more existentially, its net impact on the value of the human race. Just to raise the drama quotient here a bit, these are deep and very relevant concerns. So this topic is enormous. It's even transcendent across civic, social, business, and military spheres. So when Kristy and I scoped this, we had to make choices on our angle. The assignment we've landed on today with AI is specific to the industry we occupy, to our daily lives as investment professionals. Text-heavy industries like law and marketing have already begun incorporating tools to streamline their processes. And closer to home, of course, quant shops, hedge funds, and algorithmic trading firms have embedded AI in their models. But have we hit a tipping point where use cases and applications of these accelerated computing capabilities are ready to invade the more traditional asset management business? Said another way, just as video killed the radio star in the popular 1979 song, will AI render the investment professional obsolete? That is our challenge today. Now, here at Capital Decanted, if you're a regular listener, you know we don't tackle provocative topics just to be provocative or to provide clickbait. Our goal is to more deeply explore complex topics that need nuance and texture, as we often say. But once in a while we seem to get handed an intellectual powder keg like this one. You might remember that we started this show all the way back in episode one last season with the relevance and accuracy of private marks. And we also later last season took on private capital regulation.
And I think you could say that this is the third in this category that likely will cause some division and maybe raise some blood pressure. We're going to, of course, strive to be a calming and informative balm, as we always do, but nonetheless, this is a debated topic and a divisive topic at times. So here's our GPS map of today's episode. Where are we going today? As I mentioned, our scope today can't possibly cover, nor frankly would we be the right people to wax poetic on, the full history of artificial intelligence. However, I'm going to give a brief history of the recent progression of its use in the investment industry specifically, and I think that's important to provide a sense of where we are in the uptake and the acceptance in our particular trade. Then I want to introduce a few categories of use cases that could, and I emphasize could, define its value and, more importantly, its intersection with human intelligence in our profession. Kristy's then going to take the reins and play what you might call hype critic. Is there a limit to machines? How do we sift through the real and the exaggerated, the signal and the noise, as we often say? And that's of course based on some of her current conversations, tech advances, fundraising trends, implementation ideas, et cetera. And as always, I should say that I'm excited to hear from her specifically, given she sat in this seat, which of these use cases that I summarize, or frankly ones that I don't mention, she finds the most attractive and the most impactful. So that will embody what we call our intro segment. And then after a short halftime, we'll invite our expert guests in for more on-the-ground perspective. Now, for you loyal listeners, you know that we often try to balance an LP and a GP view, and we've done that again for you today.
Further, for this particular topic, we wanted to make sure we had a range of both belief and progress in the topic itself, and we think we've done that too with these two guests. So it will be a multi-dimensional discussion with our guests today, and those guests are Martín Escobari, the president of General Atlantic, the $90 billion growth equity manager, and Dave Moorhead, the CIO of Baylor University, the $2 billion plus endowment of the Waco, Texas university, and what I might call the little engine that could. Dave and his team were recently profiled in the Wall Street Journal as the small endowment kicking the tails off of the much bigger Ivy League endowments. So we are honored to have those two guests with us in a little while. So, Kristy, with that introduction in place, I wanted to maybe turn to you as a kickoff to our discussion and ask whether there was a moment for you that you can remember, a trigger, whether you read a news story, had a conversation, or saw a case study, where the conversation on AI shifted from what I think most of us still think of as amorphous, hyper-quant modeling technology to a tool that actually might invade traditional, normal asset management. Can you remember that moment? Was it in fact a moment?
Kristy Townsend
It's funny, it's hard for me to think of a specific point. I do remember being captured by the amazingness that was Rentech back in the day, when there were inklings of what they were doing behind the scenes. But interestingly, it really hasn't been until the past couple of months, when I've actually been researching for this episode, that I started to think beyond the hype, because I see the hype in the media and how everything's going to take our jobs. But then I had to understand the specificity and then bring it back up and realize that there are really great use cases. I just don't think the headlines right now are necessarily capturing those use cases, which is why I'm really excited about this episode, actually.
John Bowman
What about you, having sat in the analyst seat many years ago? I can't remember the timeline, but I think it was these unstructured data aids or helpers. And ESG maybe is one that's most obvious to me. Weather patterns, just as an example. That would not have been something you could source and organize and draw insights from in the old days, and tools to be able to do that are just so exciting. We're going to highlight a lot of these use cases in a few minutes. But I think that was the one that resonated with me, because I'm not walking through Target parking lots anymore looking at cars, which is the stereotypical job of old for the retail analysts that I embodied for many, many years. But let's back up and take a look at that history, and then we'll get to use cases in a moment. As I mentioned, the larger arc of AI history is mostly outside the scope of this convo. But we should debunk a couple of myths. AI, first of all, was not born with the consumer release of ChatGPT in November of 2022. AI was not born at the 2015 summer dinner at the Rosewood Hotel in Silicon Valley, when Elon Musk famously raided Google's data science team to create what we now call OpenAI. And AI did not even begin in 2006 with Nvidia, the gaming GPU company back then, entering the AI space with the release of their CUDA platform. The birth of AI, by the way, is usually attributed to two computer scientists in the early '50s: Alan Turing, whose Turing test now defines whether a machine can demonstrate human-level intelligence, and a gentleman by the name of John McCarthy, who had a summer research project at Dartmouth and apparently coined the term artificial intelligence. And of course, in more popular lore, the Department of Defense in the '50s and '60s began significantly investing in training computers to mimic basic human reasoning.
And by the way, this history, speaking of the Turing test, does probably force us, which is wise, to define what we're actually talking about, given that this whole space is so fluid and frankly complicated. So maybe a definition here, just to frame the conversation, when we say AI and its various offspring: large language models, big data, machine learning, deep learning, natural language processing. We're going to use a lot of these acronyms and phrases largely to mean elements or facets of the same thing. What we mean by that collection of terms and phrases is, broadly, the ability of machines to exhibit human-like intelligence and a degree of autonomous learning. More tangibly, it's a suite or a family of technologies, enabled by adaptive predictive power, that dramatically advances humans' ability to recognize patterns, to anticipate future events, and to identify nonlinear or what you might call non-obvious relationships. So given we're recording this during election week, I thought a fascinating yet somewhat sinister application of these early machine learning models was a company called Simulmatics. I'm not sure if you've heard of this history, Kristy, but this was a Cold War-era startup, so way back then, and this group of data scientists built a people machine. You cannot make this up. The people machine was aimed at predicting everything from Vietnam War strategy to election results to probabilities of riots based on White House race policy. And it was the first AI tool that was wielded to manipulate consumers and destabilize politics during, you're not going to believe this, both the JFK and the LBJ campaigns. This is decades, folks, before Cambridge Analytica or the social media-heavy campaigns of Obama or Trump more recently. And this captivating and disturbing story, I might say, is told in the book If Then by Harvard professor Jill Lepore, and it's a great read.
It really portrays the early seeds of the tribal divisions and the fake news poison that dominate today's national politics. So just an aside and a commendation for that particular book. Back to our story here. The history of AI in investment management has a much more abbreviated life. Quantitative hedge funds started employing more sophisticated AI tools in their models in the 2010s. Kristy just mentioned Rentech, Renaissance Technologies. Bridgewater, AQR, Man Group, Citadel, and others have long utilized computerized algorithmic models at high speeds to analyze market patterns, run economic simulations, and monitor trading rhythms, all in order to predict and trade on future price behavior. But about a decade ago, as I said, machine learning and natural language processing were added to this suite, this big monster model, to help them expand their data set analysis, their forecasting speed, and their scenario modeling. Now, these models are heavily proprietary to each of those firms. They are guarded like Fort Knox, but they are massively robust today. The other strand of history here, around the same time, was sentiment analysis products. Kristy, I don't know if you remember this, but these started to show up particularly in relation to social media activity, especially when Twitter got really popular with FinTwit, you might say. I remember beginning to see these products appear in exhibit halls of major investment conferences maybe in the mid-teens, 2015, 2016. And given the captive audience of their platform, Bloomberg was an early leader; they launched their product in 2009. As machine learning has grown up, these products can now tackle other issues beyond social media capture, such as search topic clustering, theme detection, market impact analysis, et cetera. And then finally, in October of 2017, the AIEQ fund, officially known as the AI Powered Equity ETF, was offered by an organization called EquBot and built on the IBM Watson technology.
From what I can tell, this is the first fully AI-powered investment product. The exchange-traded fund was designed to leverage artificial intelligence to actually make investment decisions, using advanced machine learning algorithms to analyze large amounts of data, including company fundamentals, market trends, and news sentiment. Based on the outcomes and insights of these AI models, the ETF identifies stocks and constructs portfolios that it believes will outperform based on a variety of factors and indicators. AIEQ now manages close to $150 million today, and while there are some mixed messages as I read through some of the news stories, it seems to be outperforming the markets since inception. So we shall see if that gains more traction. Now, as I just summarized that short history, I don't want to downplay AI's impact on the profession, but over the last decade, since this technology has been what you might call consumer-ready, the utilization, as you've heard, has mainly been constrained to large, already tech-savvy, computer-sciencey, black-boxy, mostly quant hedge funds that were already running models. Machine learning has just enhanced the efficiency, the processing power, and the speed of said models. You might remember from our blockchain episode that technologists call this type of enhancement of existing work skeuomorphic, whereas transformative, substitutionary technologies are called native. And we're going to ask our guests whether they think AI will remain in this incremental speed-and-efficiency category, or whether they actually think it's going to graduate to something much more disruptive, a new computing platform, for example. So as we posed in my introduction, the question about its acceleration and its disruption level is really dependent upon whether we are finally at a tipping point for the broader investment industry, similar to the question I just asked Kristy.
Have we reached a transition point in the tech hype cycle where some of these tools used by the first movers are now ready for broader acceptance for alpha generation purposes, for capital allocation? I would say the jury is very much still out, and consistent with that hype cycle, we should expect this stage of the evolution to come with a lot of division and disagreement on broader acceptance. Nonetheless, a mental charting of the progression of survey results over the last five years does seem to send a clear signal that at least sentiment is progressing. So let me just give you a couple of those trail markers. In 2019, so five years ago now, CFA Institute released a report that outlined some really interesting, what they called, quote, pioneers and early adopters in the space. But in the associated survey of their members as part of the report, they found only 10%, one out of 10, had used machine learning or other AI tools in the previous 12 months. So not particularly compelling there. Fast forward to 2021, two years later: the Investment Association, the trade group for the UK fund management industry, found that 83% of firms surveyed believed AI's significance in investment management would be either very high or high within two years. So that was 2021, and of course the end of that period would have been 2023. Perhaps they were a little over their skis in their prediction, but clearly we see momentum directionally shifting significantly. And then finally, earlier this year, 2024, Mercer released a survey where they talked to 150 asset managers, and they revealed that the use of AI across investment strategies and research has expanded far beyond the traditional quant cohort that we've been talking about. In fact, 91% of managers are either currently using or planning to use AI within their investment strategy or their asset class research.
So look, these three surveys are not a scientific study, but you do see growing self-reported adoption and what you might call accelerating conviction, which seems to support a view that we may be near that broad-based inflection point I keep talking about. Or as Deloitte recently put it in a report, the next frontier of investment management. Now, when you use language like frontier, that suggests a much more foundational platform transition than just making existing tasks cheaper and faster. So we shall see, and we leave a lot of that to you. The other factor here that we haven't talked about on this episode is how the current state of the industry may provide extra incentive and pressure to innovate, to save cost, or to discover more efficiencies. According to a 2024 report by Boston Consulting Group, brace yourself for this stat: since 2006, so almost 20 years, almost 90% of industry revenue has come from market appreciation. Let that sink in. The post-GFC, risk-on, free-money, easy-button environment for the capital markets has created a Goldilocks reality for the P&Ls of investment management firms. And of course this has all changed in a post-Covid world with structurally higher inflation and interest rates, not to mention geopolitical uncertainty. At the micro level, wage and other fixed cost inflation, stagnant revenues, a generational shift to passive investing, and continued fee compression are in fact squeezing leadership to find other avenues of growth if they can't depend on the capital markets moving up and to the right every year, all the time. Will AI be the answer to that? I don't know, but you could certainly point to it as a natural tailwind that pressures folks to rethink their business. So that leads me to the second portion of my segment, and a segue to Kristy: a few categories of alpha generation that AI may be able to provide as a benefit going forward.
Some of this is being experimented with, and other parts are much more in their formative stage. So take this as food for thought, and I'm certainly intrigued, Kristy, to hear what you think of each of these, and again, even ones that I don't mention. We'll press our guests on these as well. Now, these three categories, look, there's no perfect mental map for all of this, and some of this overlaps. They certainly are interdependent and build on each other. But I've organized these, as I said, into three categories. Number one is efficiency, number two is unstructured data insights, and third and finally is recommendation. And these are somewhat progressive in sophistication, you might say. So, efficiency. What I'm trying to get at here is exploiting computing power to, what you might call, short-circuit or fast-track what are largely rudimentary, manual, and time-consuming processes that the typical investment professional is already responsible for. This is mostly wielding natural language processing and voice recognition programs to scrape, highlight, transcribe, and summarize things like earnings calls, annual reports, sell-side reports, issuer filings, public records, et cetera. Now, some of these NLP programs can even directly populate your valuation or earnings model right into your Excel spreadsheet. And of course, generative tools could even take all of this now-organized data and draft investment memos to give the practitioner a huge head start on that burden as well. So the metaphor here is somewhat akin to a cockpit that summarizes all of this information at the analyst's or the PM's fingertips, and it frees them up to redirect their time away from data gathering toward actual judgment and higher-order strategic thinking around recommendation and insight. What do I make of all this data, rather than wasting all my time gathering it? Think of it as a machine that is your copilot, that augments existing capabilities and exponentially improves efficiency.
A quick and unique case study here, just because I find these interesting, is American Century, and this is actually from that same CFA Institute report that we'll note in the show notes. American Century employs a sentiment model that analyzes the language coming from management on its quarterly conference calls. Their deception model, as they call it, which is really fascinating, is made up of four components: omission, when management fails to disclose key details; spin, where exaggeration and overly scripted language could play a role; obfuscation, overly complicated storytelling; and blame or deflection. Now, obviously there needs to be significant human intervention here to consider the industry and the personalities of those particular managers. But what an interesting input, with a bit of a psychological twist. So, a fascinating use case there with American Century. That's efficiency. Second, again progressing here, unstructured data insights. And honestly, Kristy, this is the category I noted in my answer to you that I find the most fascinating, the canvas that I think is just so creative. It is estimated that as much as 90% of data is unstructured and therefore not easily accessible, manipulated, or frankly even discovered. We sometimes call this big data or alternative data, and unstructured data are sources or patterns from non-traditional places. So the most common examples are data sets from web scraping, social media activity, satellite imagery, credit card purchases, weather patterns, I mentioned that earlier, other ESG factors, e-commerce activity, et cetera. These new tools can capture, sift, and organize the alternative data. But they can then take a step further and, through machine learning techniques, draw insights from these patterns or time series, perhaps even identify relationships across otherwise randomly isolated data sets. So I alluded to this earlier.
When I first started in the industry, primary data sources consisted of walking through physical parking lots to check the number of parked cars versus last week, or channel-check calls to get a read on inventory turnover. Call all the suppliers, call all the retailers; you're trying to get a sense for whether the inventory is moving. Imagine now, instead of that, utilizing shipping container tracking and credit card purchases and freight movements, where you can paint a much more accurate and holistic picture of the velocity of a business. That, to me, is just mind-blowing. Wellington has built some of these sourcing and synthesis tools for their central research team. But the most fascinating case I'd like to show you on this one is Schroders, again from this CFA pioneers piece. The UN released a report a few years ago on the promise of what they called smart cities, and this caught the attention of a fund manager at Schroders, who wanted to explore whether the portfolio could take advantage of this theme of global urbanization. So using, again, natural language processing tools, they pulled thousands of articles that mentioned two phrases, either smart cities or future, and then they extracted the sentiment, the concepts, the topics, and the keywords that were mentioned in each article. And most importantly, they began counting, capturing patterns of particular companies that were being mentioned as either players, actors, or beneficiaries of this whole movement. The machine could then use those patterns of words and phrases, and of course those frequently mentioned company names, to pull even more adjacent or related or similar articles. So in the end, you got this visual, color-coded map laying out a huge quantity of news articles on the screen.
And the clusters in that color-coded visual map identified a handful of high-potential, small, and mostly under-the-radar companies on which to conduct a more traditional financial analysis. Again, that cockpit: I feel like I'm Iron Man laying out this visual map. And it gives you a massive head start, you might say, on where to go with your traditional company assessment. So, finally, recommendation. Thirdly, this is the final frontier. You're not only hunting down and organizing data for the human, that was the first category. And you're not only providing some insights and relationships across that data you've hunted down, that was the second. This is autonomously considering that mosaic of information and actually beginning to make independent decisions. Some of the predictive models embedded in the quant examples we gave probably verge on this, as does the ETF product that we mentioned earlier. Now, whether humans can override or veto the recommendations or trade considerations, those are variations on this theme. But the point is that the machine has now moved from copilot to captain in many ways. And this is the category, as we think about our guests today, that led us to General Atlantic. I don't want to spoil the interview with Martín, because this guy is an epic storyteller, very animated and just enjoyable to listen to. But the short version is that they have fed the machine with 20 years of General Atlantic investment committee decision making and votes. And that machine, through both that history and of course newly presented data from investment memos and elsewhere, now opines at the investment committee table on each prospective investment and provides a supporting explanation about why she agrees or doesn't agree. Now, Martín will probably caution us, I'm guessing, that these machines are only as good as their inputs and they're not a panacea by themselves. There is still plenty of noise in there.
The human element, the critical thinking, the creativity, the nuanced understanding that is still brought by the five human investment committee members, that still remains essential. But this, I thought, was one of the most compelling early examples of a collaborative relationship between AI and investment analysts, one that leverages what you might call the collective intelligence of machines and humans. And I might just say, Kristy, as I hand off to you, that that is taking cognitive diversity to a whole new level. So those are my three: efficiency, unstructured data insights, and recommendation. I will end there. I'm excited to hear what you think of those use cases, and then the broader hype cycle and trends overall.
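[Editor's note: for readers who want to picture the Schroders-style counting workflow John describes, here is a minimal sketch in Python. The article snippets and company names below are invented for illustration, and a real pipeline would layer proper NLP sentiment and topic extraction on top rather than plain string matching.]

```python
from collections import Counter
import re

# Toy corpus standing in for the thousands of news articles pulled.
# Company names and article text here are invented for illustration.
articles = [
    "Smart cities will rely on sensor networks; GridSense Ltd is a key player.",
    "The future of urban transit: GridSense Ltd and UrbanFlow Inc cited as beneficiaries.",
    "Smart cities pilot names UrbanFlow Inc among early actors in the movement.",
    "Quarterly earnings roundup for large-cap banks.",  # unrelated article
]

SEED_PHRASES = ("smart cities", "future")
COMPANIES = ("GridSense Ltd", "UrbanFlow Inc")  # hypothetical names

def relevant(text: str) -> bool:
    """Keep only articles that mention one of the seed phrases."""
    lower = text.lower()
    return any(phrase in lower for phrase in SEED_PHRASES)

# Count how often each company appears in the relevant subset;
# frequently mentioned names become candidates for further analysis.
mentions = Counter()
for text in filter(relevant, articles):
    for company in COMPANIES:
        mentions[company] += len(re.findall(re.escape(company), text))

print(mentions.most_common())
```

The companies that surface most often in the filtered set become the shortlist for traditional fundamental analysis, which is essentially the head start John describes.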
Kristy Townsend
I actually really appreciate that, and I like the categories that you laid out. So when I go through my thoughts and use cases in a minute, I'm definitely going to pull those categories back in. So thank you for sharing those. I will also say I appreciate your specificity when you were explaining that, because one thing that I've noticed is that when people, even super smart people, talk about AI, there's a lot of conflation of words and concepts, a lot of using something from one side of machine learning to explain something else. And I get it. It's still an evolving technology, and it's still a massive umbrella which all of these terms sit under. So when I first started prepping for this, one of the things that I needed to do to build the framework of where everything fits was to think about the different ages of AI. I'm just going to run through these very quickly, thinking about it in the broader sense. And I'll also say we're probably going to be posting a blog with my show notes, so people can go back and reference it, see my charts and how I lay it out, and maybe even argue with me. But when I think about the modern age of AI, in particular going back to the '80s, that was the start of these three parts, and each of these pieces builds on the last. So what happens in the '80s is that you have these expert systems, you have fuzzy logic systems, you have rules-based natural language processing, you have optimization. A lot of that now is just considered basic logic in software, but at the time these were very groundbreaking insights. From there we had the second wave, where, as companies amassed data through the '90s, we started to understand how valuable this data might be. And that was the rise of machine learning. We called it big data at the time; now we use it for predictive analytics and predictive AI, and it's mostly model-centric.
And then third, as data, computing power, and our understanding grew, it led to the introduction of deep learning through the 2010s, which is now transitioning into the age of foundation models at present, so LLMs, if you will. So it's hard for me to call this a hype cycle so much as multiple waves of peaks and valleys, just building on technology that, to John's point, started in the, what, 1940s, 1950s, somewhere around there. Everything that you read about present day stands on the shoulders of research that started 80 years ago. And again, it's not like one single hype cycle, although individual elements are throughout the hype cycle, if you will. But ultimately, as I think about that and about the broader umbrella of AI, I start to look at each of these models or building blocks in terms of the question: what is this artificial intelligence model being used for? And that is, broadly, predictive or generative, as we see. And these aren't clean. John, I think you said this before too. These aren't clearly distinct buckets, because then you have things like autonomous driving that uses a little bit of both at times, or uses a hybrid of the different types of machine learning. So again, it's not about what type of model you're using; it's the ultimate end goal of what it's being used for. John, to your point about categories, working and building on your idea of these three, efficiency, unstructured data insights, and recommendations, I think one of my best use cases out of all of my research, having sat in the seat, is just a retrieval-augmented generation system, so a RAG.
So just imagine a ChatGPT-like interface, but it's all of your investment office's or your portfolio's research, performance, due diligence notes, CRM notes, emails, board meetings, manager decks, random macro notes that you wrote on a sheet of paper on an airplane, recordings of things, and being able to then search all of that through a ChatGPT-like interface. And you can do that now. I actually have not built one, obviously, because I'm not sitting in that role currently. But there's a really great YouTube video on how you can use Meta's Llama model to feed all of your documents through and have a searchable interface for all of your notes. So if you're asking, what is the average return of this kind of manager over this time, there's a lot of different insights you could pull from that, instead of, to your point, John, having your analyst go track down that data and put it into an Excel file. So I would say that, in terms of efficiency and unstructured data insights, that is one of the top things you can do. And you can do it with publicly available models. I think you can actually even do it with ChatGPT as well. You can upload all of your documents and you'll be able to do a very similar thing. I don't know about the security there, in terms of whether that gets put into their future models or not, but it is definitely doable. So I would highly recommend that. And then beyond that, I would actually focus on the efficiency side. So process automation, enhanced research and analysis. Pulling data from large amounts of text is another one of my favorites.
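The RAG setup described here, all of an office's notes behind a chat-style search, can be sketched in miniature. This is a hedged toy example, not an actual build: the note text is invented, and the simple term-overlap scoring stands in for the vector embeddings a real Llama-based system would use before handing the retrieved notes to the language model.

```python
# Toy sketch of the retrieval step in a RAG pipeline. A production
# system would embed notes with a model (e.g. Llama) and do nearest-
# neighbor search; here, cosine similarity over term counts shows
# the same shape: score every note, keep the top hits for the prompt.
from collections import Counter
import math

def tokenize(text):
    return [w.lower().strip(".,?'") for w in text.split()]

def score(query, doc):
    """Cosine similarity between term-frequency vectors."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Pick the k most relevant notes to stuff into the LLM prompt."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

# Hypothetical investment-office notes, invented for illustration.
notes = [
    "Manager A due diligence: net IRR 14% over the 2015 to 2020 period.",
    "Board meeting: discussed liquidity gates on the credit sleeve.",
    "Macro note from airplane: rates likely higher for longer.",
]
hits = retrieve("what was manager A's IRR", notes, k=1)
```

The retrieved notes would then be pasted into the model's context so it answers from your documents rather than from memory, which is the whole point of the pattern.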
After you input all of your LPAs and DDQs and such into the system, you can then train it on certain types of text so that, in these tens of thousands of pages of documents, particularly for public pensions, it'll pull out the fee data, it'll pull out interesting terms, items that you might want, versus one person having to go through every single document and pull that out. It's actually even better because a lot of funds use similar legal teams and similar nomenclature, so you don't have to train it on unlimited amounts of data. You can actually do it pretty quickly with a few examples. With that all said, in this third category that you brought up, recommendations, I am much more hesitant about using it. And I say that because, to date, particularly on the predictive side where it's telling you what you should do, there's no actual data in many cases behind whether or not the AI is better. So I'll actually be curious to talk about General Atlantic and the way that they're using it. Because it's one thing to use it as insight, as a recommendation tool, or as a "maybe you should look at this." And it's another thing to say, oh, the model said this, so we're going with this. There's actually an interesting example of this in healthcare records, where one of the companies basically said that they had an algorithm that identified patients at a heightened risk for sepsis. And they said it was 80-plus percent accurate. And it turns out it was 80-plus percent accurate because of data leakage. And by that I mean the signal it noticed in its data was actually when doctors would prescribe antibiotics, which obviously shows that the patient had a higher chance of having sepsis. So it wasn't actually identifying sepsis patients, it was identifying doctors who had already written scripts for said patient. So I think that that's one of my hesitations and limitations with models.
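The point that repeated legal nomenclature makes extraction cheap can be made concrete with a deliberately crude stand-in. The fee wording and numbers below are invented, and a real workflow would few-shot prompt an LLM rather than hand-write patterns; but because funds reuse similar language, even this simple version catches a lot, which is why a few examples go so far.

```python
# Crude sketch of pulling fee terms out of fund documents. The
# pattern below is a stand-in for what an LLM learns from a few
# labeled examples; the LPA text is hypothetical.
import re

FEE_PATTERN = re.compile(
    r"management fee of (\d+(?:\.\d+)?)%.*?carried interest of (\d+(?:\.\d+)?)%",
    re.IGNORECASE | re.DOTALL,
)

def extract_fees(doc_text):
    """Return fee terms found in the document, or None if absent."""
    m = FEE_PATTERN.search(doc_text)
    if not m:
        return None
    return {"management_fee": float(m.group(1)), "carry": float(m.group(2))}

lpa = ("The General Partner shall receive an annual management fee of 2% "
       "of committed capital and carried interest of 20% of profits.")
terms = extract_fees(lpa)
```

Run over tens of thousands of pages, the same idea turns a weeks-long manual review into a batch job, with a human only checking the misses.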
And then I will also say that the problems with data leakage in particular are something that give me pause. It's just one of the considerations that makes me hesitant about the wider-spread usage of predictive AI models beyond this really narrow set of problem solving. And then I'll add, right now it's interesting because, particularly in the media, it seems as though there's this widespread belief that there's going to be a linear increase in the benefits unlocked by model scaling. And I get the sense, actually, when you look at the research, particularly more recently in the past few months, that the huge step changes that we've seen in large language models are mostly behind us. The step changes are getting smaller and smaller, which is why we're actually seeing this increased focus on the idea of agents. So agents are an interesting deviation from our broader conversation, because I do think that the development of these specialized agents could be useful in leveraging large language models to aid in the investment process. For example, in theory, you could program an agent to edit your investment memos and then have that agent also write alternative investment memos, to the tune of thousands of drafts, and then go in and pick the best memo, or the memo that it thinks best fits your needs. I could see where a use case like that in a broader investment office might be helpful, but that's one of the narrower benefits that I can see just at the moment, in terms of where we are technologically speaking. I will also point out, I think one of the biggest things that's weighed on my mind during the discussion of artificial intelligence is this idea that data, compute, and energy are all still finite resources.
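The memo-drafting agent described here is essentially a generate-and-rank loop: produce many candidates, then keep the one a scoring model likes best. A minimal sketch, with stub functions standing in for the two LLM calls (the draft styles and the scoring rule are invented for illustration):

```python
# Generate-and-rank sketch: draft() and score() are deterministic
# stand-ins for an LLM generator and a critic model respectively.
def draft(prompt, seed):
    """Stand-in for an LLM producing one memo variant per seed."""
    styles = ["concise", "detailed", "bullet-point"]
    return f"{styles[seed % len(styles)]} memo for: {prompt}"

def score(memo):
    """Stand-in for a critic model; here it simply prefers concise drafts."""
    return 1.0 if memo.startswith("concise") else 0.0

def best_memo(prompt, n_drafts=50):
    """Generate n_drafts candidates and return the highest-scoring one."""
    candidates = [draft(prompt, seed) for seed in range(n_drafts)]
    return max(candidates, key=score)

memo = best_memo("allocate 5% to PE secondaries")
```

The interesting design question is the critic: with a real LLM on both sides, the ranking step is only as trustworthy as the scoring model, which echoes the broader hesitation about recommendations above.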
So I'm not sure that we're going to see giant increases in model efficiency and reliability without some massive increase in the data and the energy that's required to actually generate these models. I don't see that happening in the next couple of years. And I remain skeptical, in particular, that artificial intelligence will actually displace humans, because of these bottlenecks, and also because of some broader issues, in particular with generative AI, where beyond data extraction and first drafts there are only pretty limited use cases where it is helpful, because, at least at this point, we've noticed that the models hallucinate. And this is not a new thing. These hallucinations happened in the first few iterations of ChatGPT. And it's interesting because some of them are absolutely hilarious. I think a lot of us have heard the example of the lawyer that used fake court cases in a briefing that he actually submitted to court. But interestingly, also on OpenAI's Whisper, which is this application that you can speak to, and it recognizes the language and transcribes it into text, they've even seen hallucinations in that use case. And I think it was the University of Michigan that found, when researching Whisper in particular and this use of AI, that 80% of the transcripts they pulled out to look at had hallucinations in them. So, again, this is another huge reliability problem that I think is going to limit the broader use of AI, and particularly limit its ability to displace humans. With all of that said, in closing my thoughts, I'll just broadly list through them. I think the first big thing is that pretty much all algorithmic and tech advantages eventually get competed away.
I think that, like every other miracle of modern technology, we're probably going to call a lot of this software, and it will likely change our lives and how we interact with the world, obviously, but it's not going to doom us, at least not in the next five years, in my opinion. And I actually think it's funny, because in John's portion, I think, John, you said something about the CFA report on AI usage, where in 2019 only 10% of managers were saying that they used it. But actually, if any of them used a spam filter, then that percentage is going to be a lot higher. So I think sometimes, just because we call it software, we don't look at it as a modern marvel of artificial intelligence. We just look at it as, oh yeah, that's the thing that stops people from taking my money or from running some sort of scam. And then beyond that, I would say, instead of finding the silver bullet that predicts everything, maybe focus on improving efficiency, improving your ability to capture data, improving and augmenting your job. And then, from there, to the extent that you have the data and the technological understanding and the ability to limit these data leakages that happen, then go build the system that potentially predicts your best ideas in terms of investments, in terms of recommendations, in terms of portfolio construction. And I'll close with the fact that the mass of flesh between your ears is so amazing, it has taken us, as a species, billions of dollars and more computing power than I think the researchers in the 40s could have ever imagined available, to be able to approximately do what you do, and not even in the same way. So I would say just use that when you evaluate these claims, because one of the things that always surprises me is people say, okay, well, artificial intelligence can pass the LSAT. And we're like, oh my gosh. But the LSAT is not what a lawyer does, and the LSAT maybe isn't even predictive of what makes a good lawyer.
So we do have to be mindful of that. And same thing with the MCAT. It doesn't make the AI a good doctor to just be able to regurgitate by predicting the next word. So, again, I would say really think about, logically, whether or not these claims even make sense to you, or if maybe the framing is creating a bias in our broader narrative. And with that, I will pause.
John Bowman
I think that reminder of orders of knowledge is really important. And on the MCAT, I think there was a test on the CFA too, that experiment, I should say, in which they passed the CFA. And you're right. You talk to most employers, hospitals, investment professionals, and they're not hiring based upon your CFA pass or your MCAT score, as if that automatically means you're going to be amazing or elite at said job. Rather, it's a signal for grit and hard work and diligence, and the higher-order work and synthesis and contextual decision making all come through on-the-job work. So I think that's a really good reminder. And I don't know whether the machine will ever be able to do that. You mentioned a couple other things.
Kristy Townsend
It comes through interacting with the real world and that's a thing that AI can't do yet.
John Bowman
Can't do yet. You mentioned a couple other things that I think are really important. And I don't want to lead the witness into our final sip, where we'll talk through whether we truly will be replaced, or how much of the investment professional's work will be replaced. But this race for infrastructure, the race for computing power to fuel all of this. You mentioned that both energy and computing power are still finite resources, and I think that's something we should talk to Martin and Dave about. How far along are we? Are we still dancing on the knife's edge of the limit of computing power, or have we put in enough bandwidth, for lack of a better word, for this thing to continue to evolve? And the other thing I'd say, just on the connection or intersection of human and machine work, is that even the purist, Jim Simons, who, sadly, we lost this year, the founder of RenTech, adapted the model, tweaked the model, added his own views, thought about using judgment in ways that we could improve upon the machine. So there's no one out there, or I might say very few, not even Jim, that would suggest that we can just defer completely to the machine, at least yet. So I will leave that there. And we are now going to move to halftime. Before we get to Dave and Martin, we're going to have a short conversation with our title sponsor, Franklin Templeton. We'll see you back here in a minute. All right. Welcome back to Capital Decanted. As promised, we are now here with George Stephan, who is the newly named Global COO of Wealth Management for Alternatives at Franklin Templeton. Welcome to Capital Decanted, George.
George Stephan
John, thank you so much for having me.
John Bowman
It's our pleasure. And we're extremely grateful for the partnership, of course, that we have with Franklin Templeton. I should welcome you to Franklin Templeton, for that matter. You're only two short months in from KKR, and I know they're excited to have you in this seat. And we're excited to exploit and take advantage of your wisdom, both from KKR and now what you're attempting to build at FT. I thought, as a result of that, we could maybe very quickly survey the state of play in product development in wealth management. It's certainly a priority for you at Franklin Templeton and for many of your contemporaries across the industry. So maybe I'll start, George. Both as a function of your experience in previous lives and now freshly into FT: much of the product innovation to date to bring alts to wealth management has been variations of what I would call some form of an interval structure, where you've got some level of limited liquidity at certain intervals, hence the name, with periodic gates, meaning there's a maximum to it. What do you think we've learned as an industry generally about the advantages and disadvantages of that particular structure?
George Stephan
Absolutely. Really, if you think about the structuring side, I think about five core structures, or variations of the interval fund like you alluded to, all of which give individual investors the ability to access alternative investments in an investor-friendly way. So you have the interval fund, tender offer fund, non-traded REIT, BDC, and operating company, all of which have similar attributes with slight deviations. But before I get into the advantages, I think you need to take a step back and think about the history of these products. So if you take a step back and think 10 to 15 years ago, there's obviously been this massive evolution in the semi-liquid products. The prior vintages, or the 1.0 versions of these products, were built by less experienced managers. They had high upfront fees, multiple layers of management fees, they didn't value portfolios frequently, and they didn't utilize leverage appropriately. But what you've seen over the last several years is large GPs have come in and redefined how these products look and feel. They've essentially learned from a lot of the various mistakes that previous managers have made, which I think has really tightened up the investment access and the client experience. So if you think about how managers view these products today, there are really a couple of core guiding principles: institutional-style pricing, institutional investment access, more transparency, and a lot of education. These structures make it easy for individual investors to purchase these products, which has been critical for adoption. There are no capital calls; these structures deliver instant exposure to seasoned portfolios. On eligibility, typically they're offered to accredited investors, and in some instances the mass affluent. These products are continuously offered on either a daily or monthly basis, and they have low minimums to invest.
From a liquidity perspective, they're probably providing liquidity in quarterly intervals, at roughly 5%, with simplified tax reporting and a more frequent and robust valuation process.
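The quarterly-liquidity mechanics George mentions (roughly 5% intervals) can be made concrete. This is an illustrative sketch of how a pro-rata repurchase gate typically works, not any specific fund's terms: if redemption requests exceed the cap, everyone is filled proportionally.

```python
# Illustrative pro-rata gate for a quarterly repurchase window.
# Assumes the fund caps repurchases at cap_pct of NAV per quarter
# (the roughly-5% figure discussed above); numbers are made up.
def fill_redemptions(requests, nav, cap_pct=0.05):
    """requests maps investor -> amount requested; returns filled amounts."""
    cap = nav * cap_pct
    total = sum(requests.values())
    if total <= cap:
        return dict(requests)  # everyone fully filled
    ratio = cap / total        # oversubscribed: scale all requests down
    return {inv: amt * ratio for inv, amt in requests.items()}

# $10 requested against a $5 cap on a $100 NAV -> everyone filled at 50%.
filled = fill_redemptions({"A": 6.0, "B": 4.0}, nav=100.0)
```

The design choice here is why these wrappers can hold illiquid assets at all: the gate bounds how much the manager ever has to sell in one quarter, at the cost of investors sometimes waiting several windows to fully exit.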
John Bowman
Really helpful. And I think listeners will know that in episode two of last year, we outlined much of what George is saying as far as the pitfalls and some of the challenges that liquid alts 1.0 faced. And it's so great to hear that what you've just articulated validates our experience, and that some of those lessons are being applied. So I appreciate that overview, George. And maybe part of this is addressed in some of the solutions that have been based upon what we learned and where we made mistakes the first time round. But that's a bit of a retrospective, George. If you look out a few years, and you think about whether it's Franklin Templeton's own product ambitions or other activity and velocity you see across the industry, where do you expect to see the most innovation? Either in product spec for the current interval variations, or in asset classes, or some combination of both?
George Stephan
So maybe we start with the structuring question, then we can go into where the puck is going from an asset class perspective and where the puck's going in more dynamic portfolio solutions. I mentioned the five structures before. I think what you're going to see in the future here is a lot more focus on the pure-play interval fund and tender offer. You're going to see them predominantly pop up, the reason being that they trade very similarly to mutual funds, so it makes it easier for advisors and clients to do the business. That is, no subdoc. It's click and purchase. They sit on the mutual fund chassis. More GPs are going to try to offer solutions to the mass affluent audience, and it's probably the ideal and best way to do that today. I think you will also see fintech companies continue to make it easier for investors and advisors to utilize the other core interval fund structures. But from a pure-play structure perspective, you're going to see more interval funds and tender offers pop up. On the asset class side, what I would say is real estate equity and direct lending are the most established asset classes, and investors are going to continue to allocate to them. Real estate has had a slowdown over the last few years, given all the obvious fundamental reasons, but you will probably see an uptick in demand over, call it, the next one to two years as fundamentals get stronger. Direct lending, and private credit more broadly, will continue to be high in demand and grow at a fast pace. What you are going to see in private credit is a lot of managers starting to add additional standalone flavors, such as asset-based lending and real estate credit, which are a nice complement and diversifier to direct lending, and another way to essentially get a 9 to 11% yield. And then there's the PE, infrastructure, and PE secondaries side.
What I would say is you're still seeing a lot of entrants come into the market, and those are probably three of the fastest areas of growth over the next few years. At Franklin, we are extremely excited about the opportunity in PE secondaries. Institutional investors' need for liquidity, given the slowdown in broader exits, is a very real theme, and I don't think there's been a more exciting time to be an investor in PE secondaries. Infrastructure is such a unique asset class, and one that's been underpenetrated by individual investors. The way I think about true private infrastructure, it's resilient to economic shocks and it has low correlation to other asset classes, so it has a real need within a portfolio. As I mentioned, it's not an asset class that individual investors historically had access to, so I do see a lot of demand coming there. The next one, and this is going to be a common theme for years to come, is this notion of a public-private combination fund. You've seen a couple of things in the news as of late, but essentially a combination of public and private securities, predominantly credit-driven for now, wrapped into one portfolio feels like a trend we're going to see over the next couple of years. And going back to the structuring point, I think the interval fund structure is probably the most appropriate structure for this today, because it allows you to have a higher allocation to privates but still go down to the mass affluent. I think we're a few years away from the regulatory easing and relief needed to facilitate privates in ETFs beyond what we've seen. What I would also say is there's still a large cohort of advisors who have not yet adopted alternatives. Think about a multi-alternative solution where, either through managed solutions or through one offering, you can provide that cohort of advisors the ability to invest across a broad pool of alternatives, across every single one of the asset classes we've talked about.
I think there are going to be managers that figure out how to create these model portfolios or how to create a product that delivers one holistic solution across all asset classes in a simplistic way.
John Bowman
George, I couldn't agree with you more on that last point. The multi manager solution, I think, is the holy grail. Of course, that's a whole lot easier said than done with all the mismatches and toggling between underlying asset liquidity and, of course, the liquidity gates that are on the overall wrapper. But we'll leave that to the product experts. But that's been a fascinating tour of where we're going. I think if nothing else, as we've seen from the tsunami of news flow that you alluded to, we're in for a lot of change and a lot of new opportunity and innovation. So, George, thanks for helping us think through this and listeners. We'll be back with our guest segment.
George Stephan
John, thank you so much for having me.
Kristy Townsend
Well, welcome back to Capital Decanted. And we are now, as promised, in studio with Dave Moorhead, CIO of Baylor University, and Martin Escobarri, who is president of General Atlantic. Gentlemen, welcome to Capital Decanted.
Dave Moorhead
Thank you.
Martine Escobarri
Thank you, John. Thank you, Christy.
Kristy Townsend
Well, as you know, guys, in the introduction we talked through a bit of the history of AI, particularly as it relates to investment management over the last decade, which is really when this has accelerated. We even channeled one of my favorite songs from back in 1979. Martin, I don't know if it was big in Bolivia, but "did video kill the radio star?" is kind of the theme we're channeling for this particular episode; that is, does AI kill the investment professional? So that's, in somewhat sensationalist language, what we're trying to tackle today: what will be the peaceful coexistence of the human and the machine as we think about our industry in the future? So that is our job today. We're going to have a wholesome discussion, and we're grateful to have LP and GP perspectives, to have folks that have thought a lot about this and have, I'm sure, very important, captivating views. So, Dave, I'm going to start with you, if that's okay. Perhaps this is semantics, but last season we interviewed Chris Dixon, who I'm sure both of you know, over at a16z, on blockchain and its computing power and whether that's going to be the next platform. And he distinguished the ways technologists often think about new technologies. He said there is a category that really extends existing technologies. It makes things easier, it makes them faster, but these are incremental improvements to help existing processes. And he called that skeuomorphic. But then he also said, and think of mobile as an example of the second one, there are these pioneering, transformative technologies that really substitute, that shake up the whole world. These are novel. These really change the way that we do things completely. So, Dave, I'm interested, as a starting point, before we get into some of the specifics: where would you put AI tools on this spectrum as you pull out your crystal ball? And why would you put it in one category or another?
Dave Moorhead
Let me start by saying I'm not a technologist, I'm an allocator. So whether it's a16z or Martin, they're likely to have a better idea than I of what exactly works and how it works, et cetera. But as we stepped back and looked at AI, I think there's been some commentary in the press about it perhaps being as impactful or more impactful than the Internet, and we agree with that. The way that we talk about it internally is that the Internet basically provided a bulletin board that people could go stick notes on, like you'd see walking around college. At least in college when we were there, there were push pins with papers hanging on the board: company A has directions about how to use their products on there, company B has order forms on there, et cetera. But thinking is actually a uniquely human trait. To date, we don't see that in the animal kingdom, and we hadn't seen it in technology. But it appears that we're moving into machines having the capability to actually do some of the things that humans do. So I think we're in the nascent stages of that. We had ChatGPT 2, 3, 4; there's a 10x productivity capability increase with each version. So I suspect that, as we go forward, yeah, I think that ultimately it will be life-changing. I think that it's not right now, and that's where investors are trying to sort out what's going to work and what's not. But yeah, we think that it's a whole new ballgame.
Kristy Townsend
So we've got a vote for novel from Waco, Texas. Martin, I want to continue that, and maybe ask you a similar question in a different way. We spent some time, both in this episode's intro and in that aforementioned Chris Dixon episode, talking about the tech hype cycle. It's something I've heard you talk about as well. And you have articulated that these hype cycles, while painful in the moment, are also important and critical, because they mobilize resources, they accelerate investment, even over-investment that leads to consolidation and rationalization. I think I've heard you say they make the future happen faster, these tech hype cycles, as hard as they sometimes are to fathom. So as you think about this in the context of previous hype cycles and technological platform turnover, how would you compare AI?
Martine Escobarri
This is my third hype cycle in my adult life. Right as we were graduating, we had the beginning of the Internet, and that was massive and transformative. And I agree with David, it started as a clipboard, but it really did touch most of us in ways that were incredibly exciting. About a decade later, we had the cloud and the mobile Internet, and another cycle began, and it was equally transformative. This is the third one, and of the three, I think this is the most transformative, because it will touch all corners of the world, it will touch all industries, it will change most of the activities of businesses in ways that are perhaps yet to be fully understood, but are going to be completely transformative. The quantum of capital that's necessary to develop this new technology is gigantic. Just ChatGPT 5, which is the fifth version, will cost between two and three billion dollars to train, and this is incremental learning based on the previous four versions of ChatGPT. And this is just one of the players. The data and energy consumption of all these new players, developing models and using models, is projected to represent 10 to 15% of all global energy consumption within a decade. These are big, big, gigantic numbers. In every cycle we get excited, early adopters flock in, it attracts a lot of interest, it attracts a lot of capital. It's been true of the previous cycles and it's probably going to be true of this one: there is a period of over-excitement and over-investment. There's a period of unfulfilled expectations, because we all thought that AGI was upon us, and drugs were going to pop out of the machine without much human interaction, and we are scared that the machine is going to lead to an extinction of humanity. And that doesn't happen in the first five years. So we get disappointed. And then there is a correction, and it leads to some pain and blood.
But out of the ashes of that correction, the reality of it is transformative enough that, looking back at it, we'll say, wow, that was great. What's different about this one? First is the quantum of capital and the globality. Second is the speed. And I'll give you a few numbers so you can feel the acceleration of speed. It took Facebook four and a half years to reach 100 million users on the old Internet, and we were Facebook users. Christy maybe is an Instagram user; my daughter likes neither of them. Instagram halved that, at a little over two years. TikTok did it in nine months, so halved again. TikTok, which is turbocharged mobile: nine months. ChatGPT did it in eight weeks, got to 100 million. So it is a massive tsunami that is happening: eight weeks to get to the tipping point of critical scale. It's more capital, faster. So buckle your seatbelts. And yes, there's a lot of hype, and there's going to be a lot of blood, but at the end there's going to be a transformative change.
Kristy Townsend
Now, I appreciate that context. And I will add, I was in college, I think it was my freshman or sophomore year, when it was "the Facebook" and you had to be invited by college. And because UT wouldn't let their students on, Facebook was opened to every other university in the state of Texas to force UT to adopt. So I have one of those weird Facebook profiles that started in 2004, which makes me feel incredibly old. So yeah, Facebook is over 20 years old, kids. With that, though, I would like to switch to how we think about using this. So, Dave, I'll start with you. It seems like Baylor has a top-down approach that identifies strategies or markets first, versus being bound by a broader asset class, but then it sounds like you layer in a more bottom-up focus on due diligence, and then personalization from the team. I will say, having been on that side, it seems like, with limited resources and manpower, that's a lot. So have you thought about how AI tools might really help, maybe in sourcing or due diligence or parts of that process? And where have you thought about incorporating these, and where might they assist?
Dave Moorhead
Yes, we have thought about it, but we haven't implemented anything. The bits that we've run into, or at least the use cases that people have given, have been more along the operational lines. So, say, you feed in the subscription docs or the PPM that you're pleased with, and then feed in all of the other docs and see where they're different, and whether you can negotiate terms with the manager, to make sure you're not missing anything and to make sure that they're aligned. Our perspective is that what we're doing probably does not hinge on what fees we're paying, or what the liquidity terms are, or the nuances of the verbiage that's in the documentation. We do things from a top-down perspective, and I think that that is more discretionary than systematic. So, for example, at the moment I believe we don't have any systematic strategies in the portfolio. That's not to say that they can't work for other people, but in terms of how we do things: if South America, say, had a crisis and the markets in South America declined in value, and we can witness that, we can see it on the screens, then we would be immediately pushing money into that space. So from the standpoint of execution on our investment approach, it's not clear to me that there would be any particular benefit that AI could provide that we wouldn't already see on the screens. It could provide something, say, in very niche markets, maybe something that wasn't on our screens, but we also don't invest in those markets because of the lack of liquidity, because of how we put up guardrails to make sure that we have liquidity and that we can get out. So from a top-down perspective, we haven't incorporated AI into our approach. And then I would say that, in general, the other area where AI has been talked about is sourcing, but we're actually trying to go the other way and have more concentration rather than less.
So that works against the whole AI bit as well, because if we're going from 80 managers to 60 managers, we can simply move on from some managers, et cetera, and we're happy with the ones that we have. So again, we're not the early adopters of anything. We tend to be a highly suspicious group that wants to let other people try it and see if they come back alive, and then we'll participate. So maybe we can pass it to Martin and he'll tell me all the things that I should be doing, and I'll say maybe we should talk again in a year and see if you're still alive, and then we can go from there.
Kristy Townsend
Well, building on that, Martín, one of the major enabling dependencies of this entire narrative is the race to build out infrastructure and computing power. Is this still a race, though? I ask because I would like to know where, in your opinion, we are in this journey and how we get through it. And then, secondary to that, as a growth investor, how are you approaching AI investing and evaluating new opportunities with the tech?
Martín Escobari
We've learned from previous cycles that the early stages of the cycle are very risky, because the winners are not yet clear and the valuations get way ahead of them. And the infrastructure phase of the cycle is the most capex-intensive. So there are a lot of dead bodies of brave young funds that are pioneers in the infrastructure and pioneers in the emerging business models. Our strategy in every cycle is to meet everybody and wait until we have clarity that the necessary infrastructure has been set and the winning business models are beginning to emerge, and hopefully the valuation bubble begins to crack so that we can buy things at reasonable prices. So we've been talking to everyone for over three and a half years. There are three types of investment opportunities. You can build data centers, and energy providers to the data centers, provided you have offtake contracts that guarantee you demand regardless of whether you end up with overcapacity. That's a good area, and we have a sustainable infrastructure sleeve within the four products that we offer to endowments and institutions and families. That seems okay, provided you've got locked-in demand. Then there's the other type of infrastructure, which is actually building the chips and building the large language models, where we're talking billions and billions of dollars, about a dozen players trying to achieve supremacy, and different approaches to building these large language models. Some people want to boil the sea and create AGI. Some people say, forget the sea, I'll just boil this pond, just a good part of the sea, and I'm going to get you better, faster answers with 1/100th of the investment. And that's all still playing out. It's $10 billion table stakes. We don't yet think there's enough clarity to play in that; the table stakes are too high. And there's a third layer, which is the application layer.
People that are leveraging someone else's data centers and someone else's LLMs to build new applications, applications that use AI to render services that are new and different and innovative and differentiated relative to those that are not. Embedding AI into the application layer: that, long term, is the most exciting pond in which to fish for growth equity. It is thriving. There are two problems right now with that. One is that the next version of ChatGPT incorporates most of the functionality that you've been developing, so you get rolled over by one of these gorillas that is constantly innovating and creating new functionality. That's one, though you can protect yourself from that. But more importantly, it's venture-stage companies with growth equity valuations: 3, 5, 10 billion dollar valuations on 2, 3 million dollars of revenue. A thousand times revenue is rarely a fair price for a young company. So it's still a little early. However, we are finding some opportunities in the third layer. We can give you some examples, but I think the third bucket of investing is beginning to become attractive. Valuations are becoming more attractive and companies are becoming more proven. And after three years, we just did three investments in AI application companies, which we think are going to be great.
Dave Moorhead
Yeah, and if I can pick up on that. What I was talking about before is what we're doing at Baylor, but in terms of investing, we do have investments in the ground in the first category, which Martín was talking about. In the second category, we do not. And then in the third category, we would rely on managers, like Martín or VC or growth equity or whomever; we don't have the wherewithal to sort that out. We can see the first one and we can make investments there, and we have, along the lines of what Martín is talking about, offtake agreements, et cetera. So we're bullish on that. The second one, like Martín was saying, seems like a battle, and someone will win, but we don't really know who. And in the third bit, there are a million companies, and we have to rely on somebody like Martín to go figure out which three of those million are going to work.
John Bowman
Dave, I think Kristy will press you a bit more on some of those use cases in a couple of minutes, but I want to stick with Martín on just one more subject. And by the way, speaking of dead bodies, as I've admitted on this show, Martín, now that I've turned 50 and I'm in therapy over my mistakes in my early adult life, I was one of those dead bodies that felt like the gorilla rolled over me, with the Webvan recommendation at 23 years old. You know, that was one of my shiniest moments. And as I've often said, maybe I was just 22 years too early versus wrong, but hey, we'll let history be the judge of that. Thanks for picking that scab nonetheless. Speaking of use cases, and the way we've been pushing them, Martín, I want to ask you about the use case that we hinted about in the intro, the one that was the catalyst for me reaching out to you in the first place on this particular subject, which is your friend ADA. And ADA, of course, is a new voice at the table of the General Atlantic investment committee. As I understand it, and I'm going to let you tell the story with your typical South American pizzazz that I've grown to love, you've fed this machine 20 years of data, and she is now speaking and influencing at the investment committee table. So talk us through that history, and maybe what you've learned, what pitfalls there have been, and maybe a couple examples of how she's played a role in your decision making.
Martín Escobari
We are a growth equity investor. We manage about $100 billion. We've been around for 43 years. We have scale. We help companies in many ways, and over the last three years, the most popular way in which we're helping our companies is to embed AI into their core operations. There are lots of things that are working really well; we can spend time talking through that. But we challenge our CEOs to think carefully about how AI is going to dramatically change the way you compete. You have to have an AI project. And we looked at ourselves, and I said, how is AI going to change the investment profession? And we said, let's try using AI as a copilot for the investment committee, which is the most critical function within a private equity fund. You decide where to deploy your capital based on a universe of opportunities. So we've been training the machine over the last three years with 43 years of investment outcome data on these 560 companies we've invested in over our lifetime. We've trained her with 15 years of voting records of our investment committee members, who, for the last 15 years, have historically been our best investors. They vote a certain way on the app (it used to be on email, but now it's on the app), and they reason, they justify why they voted a certain way. And we have this data and so forth. Now, when we run our investment committee, it's open to all investment professionals. So every Tuesday, for five hours, we go typically through three to five opportunities that are being discussed. It's open to everyone on Zoom. It's a great teaching moment for our young investment professionals. And then we go into executive session, where all the partners are invited: the investment committee members on video, the rest in listening mode. We vote on the app, and we show the votes at the same time so we don't influence each other. And then we discuss and try to change each other's minds, because two negative votes will kill a deal.
When a deal's being killed, we try to see if there's some convincing to do. For the last three years, we've had the computer vote and justify her vote. We named her ADA. It was crowdsourced; it was someone on our team who wanted to honor Ada Lovelace, an incredible woman born in 1815 and one of the first computer programmers in the world. She threw out this bold statement that machines could do more than simple math, and it took about 150 years for that dream to come true. But now it's very clear the machine can do more than simple math. So ADA has been voting for the last three years. She's more negative than us, so she says no a lot more than we do. She's 100% fact-based. She will say things like: we've never seen a software company grow margins so quickly, ever. Where we would say "this plan looks aggressive," she says, "I've looked at 500 data points and it's never happened before. Maybe this time is different, but probably not." ADA is not persuaded by charisma. ADA doesn't care if you just did a home run deal and therefore you're walking on water when you bring your next deal. ADA is cold. ADA is fact-based. She speaks truth to man, to person. Now, we backtested ADA to see: if we had listened to ADA over the last decade, would we have done better? We would have done significantly better if we had just followed ADA on her decisions.
John Bowman
Is that because you would have made more right choices or because you would have avoided more wrong choices?
Martín Escobari
Both. More the second than the first, but both are true. Now, what's the catch? ADA has been trained on the past. She's very good at predicting the past. The true test of ADA, as she makes decisions concurrently with the humans, is whether she is better at predicting the future. And we won't know for a while. As you know, Dave and John and Kristy, in private equity it takes five to seven years to know if you were truly right, because mark-to-markets are nice, but cash on cash is right, and it takes a while for that to show. I suspect she's going to be less brilliant than in the past, but having her as a copilot, I think, is making us think slightly differently. And she's likely going to get better with time.
John Bowman
One more question. Did you train her on the deals that you invested in or did you train her on all the deals that were up for the committee to vote on?
Martín Escobari
ADA 1.0 just has the data on what we actually did. ADA 2.0 tracks the things we said no to, and whether we should have done them. ADA 3.0 is a partnership with some of our friendly LPs, saying we'd like to use your data as well. And ADA 4.0 is going to talk to us and interrupt us and say, "I don't agree with what you just said." That's the development plan. I don't know if we're going to get to ADA 4.0, but we're playing with it. And it's part of that playful experimentation which I think gives us sensitivity to the challenges of embedding AI into complex processes. It's not there yet, but there's enough there that I'll retire before ADA, let's just put it that way.
Dave Moorhead
I was going to make a joke about ADA taking over all critical infrastructure of the world.
Martín Escobari
I think she's a copilot. This idea that computers are going to replace humans, I don't think it's correct. Pilots in airplanes have been using autopilot for 30 years. We still need a pilot, but 80% of the time it's the machine driving the plane. But you need the pilot for that critical five minutes.
Kristy Townsend
Absolutely. Couldn't agree more. And Dave, having sat in a seat similar to yours, obviously you said before that you're not quite doing what Martín and his team are doing, or really integrating any of it into your process yet. But it does seem to me that the lowest-hanging fruit in terms of use cases would be improving efficiency or developing some sort of unstructured data insights. So building a retrieval augmented generation system, RAG, if you will, for your personal notes or data, summarizing investment committee or other meetings, taking a first stab at drafts of communications. Do any of these actually resonate with you, or do you think that they would help?
Dave Moorhead
We run things so differently. For example, you said your notes and commentary. I don't take notes, so that doesn't work. And in general, our office runs from the standpoint of: if you're not going to get a 10x return on what you're doing, then stop doing it. I know a lot of people in the industry who will go to conferences and then come back and spend half a day or more inputting their notes into some system. So obviously, if you were doing something like that, then yes, please address that, because like you said, that's low hanging fruit. That's fruit on the ground, molding and dying. But we try to dispense with a lot of that just in the structure and approach that we take. So a lot of that low hanging fruit that you're referring to doesn't particularly apply in our particular case.
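For listeners unfamiliar with the acronym, the retrieval-augmented-generation idea raised in the question can be sketched in toy form: index your notes, retrieve the ones most relevant to a question, and hand them to a language model as context. Everything below is illustrative, not anyone's actual system; the notes, the bag-of-words retrieval, and the prompt template are stand-ins (a production RAG system would use an embedding model for retrieval and a real LLM call for generation).

```python
# Toy RAG retrieval step: rank stored notes against a question by
# bag-of-words cosine similarity, then assemble an LLM prompt.
import math
from collections import Counter

# Invented meeting notes standing in for an allocator's knowledge base.
notes = [
    "IC meeting 3/12: passed on the data-center deal, valuation too rich",
    "Manager call 4/2: fund is consolidating from 80 managers to 60",
    "Conference note: offtake contracts de-risk energy infrastructure",
]

def vectorize(text):
    # Crude tokenization; real systems use embeddings instead.
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, k=2):
    # Return the k notes most similar to the question.
    q = vectorize(question)
    ranked = sorted(notes, key=lambda n: cosine(q, vectorize(n)), reverse=True)
    return ranked[:k]

context = retrieve("why did we pass on the data-center deal?")
# This prompt would go to an LLM; only the retrieval step is shown here.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: why did we pass?"
```

The value of the pattern is that the model answers from your retrieved notes rather than from its training data, which is why it suits private, unstructured material like committee minutes.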
Kristy Townsend
Martín, you described the end of the process with ADA, meaning the decision point of the investment committee. Maybe I'll ask you to go back upstream a little bit, to each of the small decisions that interact and compound before a prospective deal even gets to that table where ADA is going to slap you around a little bit and tell you why you're wrong and why you're too emotional and so forth. As you work through that process, are there other tools at the more micro level that your partners or other investment professionals are using along the way? Other use cases that are helping them strengthen the pitch that goes to ADA and the five humans?
Martín Escobari
AI comes into our process in three ways: it's affecting how we find companies, how we decide whether to invest in those companies, and how we build value in those companies. We talked about the middle bucket; let me talk about the two other buckets. We buy 90 data sets from the universe, and we use AI to produce 19 dashboards of interesting companies that are popping up, companies that merit someone giving them a call and getting a little more information than what's available in the data sets we can buy in the market. In any given year, we meet 12,000 companies, we do superficial due diligence, we get under NDA with 2,000 companies, and we'll invest in 20 to 30. So we're saying no 500 to 700 times for every yes. Where do we find these companies? We use big data augmented by AI. Things like: we have a robot that scrapes LinkedIn. If the number of people who say they are working at Baylor goes up 500% month on month, we're like, we're going to call those guys at Baylor. I don't care what they're doing, but they sure as hell are growing, because they're hiring a lot of people.
John Bowman
That would be a problem in your case, perhaps.
Martín Escobari
But if you were selling widgets, perhaps not a problem. If app downloads go up, if credit card receipts go up, if mentions of you on bulletin boards go up, it triggers something that makes us call. So we're using it for sourcing. In terms of value add, this year we'll do about 500 AI projects in the portfolio. What's really working? Implementing Copilot to augment the productivity of computer programmers. Check. Using AI to augment the productivity of your digital marketing. Humans do A/B testing; that's how they improve their online marketing. Machines do A-to-Z testing faster, cheaper, sooner. The third one, customer service: chatbots are much more efficient than humans at solving whatever it is you need to do with your travel agent or with your bank account manager or your wealth manager. And the fourth one is enhanced decision making: making credit decisions, making predictive churn decisions, making supply chain decisions about the optimal level of inventory of a particular product based on the demand curve that you're seeing. The computer can give you insights to make those decisions better. So we're using it across the three dimensions.
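The LinkedIn headcount trigger described a moment ago is, at bottom, a simple threshold rule. A minimal sketch follows; the company names and figures are invented, and the 500% month-on-month growth is the only number taken from the conversation:

```python
# Toy version of the sourcing signal: flag any company whose
# month-on-month headcount growth crosses a threshold for a call.
# The data here is invented; a real pipeline would scrape or buy it.
headcounts = {
    "WidgetCo":  {"prev": 40,  "curr": 240},  # 500% growth: call them
    "SteadyInc": {"prev": 100, "curr": 104},  # 4% growth: ignore
}

def flagged(data, min_growth=5.0):
    """Companies whose headcount grew by at least min_growth (5.0 = 500%)."""
    return [name for name, h in data.items()
            if h["prev"] > 0 and (h["curr"] - h["prev"]) / h["prev"] >= min_growth]

print(flagged(headcounts))  # ['WidgetCo']
```

The same shape would apply to the other signals mentioned, app downloads, card receipts, bulletin-board mentions: compute a growth rate, compare it to a threshold, and queue a call.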
Dave Moorhead
And I think there's a lot to be said for what Martín began with in terms of scale. General Atlantic obviously is one of the gorillas in the space. We have five investors here, and endowments in general, their expense ratio is less than 20 basis points a year, somewhere between 10 and 20 basis points. Even if you took UT's UTIMCO or Harvard Management, which have the largest endowments, and you applied 10, 15, 20 basis points to that, you still don't have the resources that Martín has to push against the challenges that exist in trying to get something that's workable. So we'll sit back and talk to companies and come up basically with the same four categories that Martín just said: these are the areas where AI really works. It really works in healthcare, with claims management, et cetera. It really works with coding and assisting with that, doing it faster. It really works in customer service, potentially eliminating the need for a human. These are, like what you were talking about, Kristy, in the corporate world, the low hanging fruit. And obviously we see anecdotes about this all the time from companies. That is where AI is being deployed first. Now, I also think that that's Internet 1.0. That's what everyone sees now. And the question in the marketplace is: yeah, but how much benefit do you get if you replace your customer service staff or unit with chatbots? It improves margins, and it actually does grow revenue, but it doesn't grow revenue by 200% or something like that. I think the push-pull in AI use cases and benefit to investors is that everyone sees the potential, but the use cases today don't make full use of what that potential could be. And that doesn't mean it's not going to happen. To Martín's point, the first big cycle was the Internet, and in 2000 we were told that we would be able to download any movie, whenever we wanted, wherever, on any device. And then it took 15 years for that to happen.
And I think that AI will happen faster, but it will still take some time. Even if you go back to autonomous driving: there were a bunch of people in '17 and '18 saying this was going to be fully online and good to go, insurance companies are going away in 2020. Well, we'll get there. Waymo's making strides, et cetera, but it's still going to take a while.
Martín Escobari
Can I react to two things you said, Dave? First, the comment around needing scale to develop these capabilities. We're 100 billion. We barely have enough scale to be able to afford the infrastructure we think is necessary to create alpha. And you barely have enough scale to cover the number of GPs you need to cover, and the pressures on you are so high. So the rise of the minimum efficient scale, and how stretched you already are, will lead to massive consolidation in my industry. We have 6,000 GPs in the United States. By the end of the decade, we're probably going to have half of that, and 10 years after that, 10% of it. We are in the early stages of the consolidation phase, and it's going to change our industry and force us to get a lot better. But I think this issue of the rising minimum efficient scale is one of the factors, and you being spread very thin is the other, that are driving consolidation in the world. And your second point about Waymo: an interesting story about how we really are in the early innings of understanding AI and developing its capabilities. We do a lot of investing in life sciences as well; it's the second most exciting trend currently in innovation. But Waymo, to decide whether to stop at a stoplight when a baby stroller is moving through, needs 10,000 pictures. That's why it has all these cameras. And with 10,000 pictures, the computer says, I must stop. The human brain, which has a million years of evolution in the making, gets to the same decision with 10 pictures, with a lot less energy. So there are things in the way neural networks work biologically that we are so far behind in understanding how to do digitally. But within a decade, those two will converge, and AI will become 10 times, 100 times more powerful.
Kristy Townsend
Martín, I'm going to give you the last word here on one of the elephants in the room. You've talked about a lot of gorillas, but the elephant in the room that we haven't really touched on is data privacy, data accuracy, overfitting, to use a buzz phrase. Despite all these exciting future use cases and potentials for efficiency, and the machine and the human singing together and improving investment outcomes, how do you make sure that these moral and ethical lines are not breached in the meantime? How do you balance those two very important elements?
Martín Escobari
This is a complicated issue. How much time do we have? Now, regulation in time will protect privacy. It has to. Regulation needs to be global. It is not enough for California to regulate. If California regulates and New York doesn't, they're all moving to New York. If America regulates but China doesn't, we're all moving to China. So there has to be a global regulatory framework, which will take time to develop. On the way there, we're running risks with deepfakes. Increasingly, the AI is breaking into your password, the AI is breaking into your biometrics. The AI will soon create an avatar of John. I actually have one in my phone. One of my companies created an avatar of me. I showed it to my wife: look what I just did yesterday. It wasn't me, but it looked exactly like me. That's really scary, because in the post-Covid world, a lot of what we do is digital. When was the last time you went to a bank branch to do anything with the bank? You did everything over here. And if I can pretend to be you better than you, I can take all your money. So that's really, really risky, and that technology is evolving very, very fast. Gladly, there are the white knights. There are companies, and we have a couple in the portfolio, including one in Brazil that's just really, really exciting, that are the antivirus of deepfakes. Those are developing, and those need to have more resources and more capabilities than the guys working on the deepfakes. That's coming. The third one is the one I worry more about, because it's hard to predict which way it will go. You made reference, John, to how the music industry changed and so forth. Let me tell you a story of two industries: one was decimated, one was empowered by a new wave. The newspaper industry was destroyed. Between 2000 and 2020, the number of subscribers in the US went from 56 million to 24 million, half. Ad revenues went from 50 billion to 10 billion, an 80% reduction.
Quality of journalism and diversity of journalism suffered. In that period, Facebook and Google went from 0 to 90 billion in ad revenues without producing their own content, just aggregating content that was produced for free by the newspapers. That's not a good precedent. We blame social media for disinformation and for division in this country. The death of the newsroom is also at fault. We decimated the revenue model of newspapers. Now, the music industry faced the same challenge with the Napsters of the world. You no longer needed to buy your CDs or your tapes or your LaserDiscs. Kristy, LaserDisc, you and I and Dave, Elvis. Initially, in 2000, when this digital distribution started, overall music revenues dropped 50%. But then we figured out a model where we could distribute digitally and reward the producers of music through the Spotifys and the Apple Musics, the new platforms that were paying per click. And we were all gladly making sure that we were paying the producers. And last year we were back to $30 billion. So it took one decade to drop and then one decade to return, now growing 20% plus. And music is alive and well and thriving, whereas newsrooms are dying. The same is true of these large language models: they're being trained for free on all the content producers in the world. I hope we figure out a way that this is not the final nail in the coffin of the newsrooms and the influencers and the experts who are writing things on the Internet, and that we find a way, like the music industry did, to ethically share the spoils of this new technology.
John Bowman
And I think that last statement, Martín, is such an illustrative, symbolic statement, showing that we are at the height of this frenzy, where, as we all know, a lot of rationalization and a lot of sensationalist narratives need to sort themselves out. And just as we've seen in the last two waves, as you described, there will be some big, big winners and a whole lot more losers. And I think that is what we're all trying to figure out. With that, I thank both of you, because you're two of the most intellectually curious and self-reflective professionals in the industry that I know, and Dave's a perfect example of that. We were just talking offline about his social media feeds. He is teaching the world about not being entrenched in his thought process, being open to new ways of doing things, leaving behind things that don't work. And that's exactly what's needed to approach this new wave of doing business with circumspection but open-mindedness. And I think both of you agree, all of us agree, there will be some level of coexistence between the machine and the human, and how that plays out, and in which use cases, is certainly up for debate. So we're going to leave it there. Thanks to both of you. You have tremendously helped, and I can speak for the listeners and for Kristy and me: this has been just a delight. Thanks for teaching us and sharing your wisdom. Listeners, we'll be right back with the last sip. Well, welcome back to Capital Decanted. We are down to our last sip, one little bit of juice, a bit of sediment left, as we finish off and summarize a great journey through the intro and then through that last discussion. Kristy, they're just superstars in my view. Every time I talk with them, I learn something. And what I hope all of you saw was a glimpse into their passion for learning. That's what makes them great, and I think we got just a small picture of that in that conversation. But what did you learn? Anything change?
Any big takeaways or revelations from our conversation with them?
Kristy Townsend
So, a couple of things. The first big one came from Dave, and that is that any implementation of machine learning, or any tech under the umbrella of AI, is going to be pretty specific to your unique organization or its unique needs. So if you have a giant portfolio with hundreds and hundreds of GPs and you need some sort of automation to help with that, I think it's useful. If you are consolidating your portfolio into best ideas, and it is a much narrower number of line items, then maybe having that system in place doesn't make as much sense. I will say, on Martín's side, while I still have hesitation in terms of AI flat-out predicting, I do appreciate that implementing a system that tracks both the investments that you went into and the ones that you didn't, and then iterates on itself, could actually be a really useful tool, even on the allocator side, to the extent that the capabilities become cheap enough to provide that insight. Because I do think it provides a level of intellectual honesty, assuming no data leakage, of course. It creates an intellectual honesty in terms of being able to figure out how your decisions actually impacted the portfolio, because right now that's really hard to do. I'd love to hear your thoughts too.
John Bowman
I think your point from Dave is so good. It's just a reminder, maybe a big asterisk, on much of what I talked about in the use cases: look, this is going to be different for everyone. And maybe further developing your point: if you're a traditional small endowment, meaning few people and mostly outsourced external managers, some of that unstructured data stuff is just unnecessary, because you're relying on your managers to do that, as he was mentioning. They are much more resourced and probably have a lot more computing power, and you can expect, as part of that relationship, that that's part of the due diligence they do for you. Similarly, in the way that Baylor has this velocity of human engagement, in the way that they think about leaving a lot of dry powder on the side, being very opportunistic, having this very collaborative, ongoing, always-on generalist approach to allocating capital dynamically, it's just different. And that's just Baylor as an example. But I think everybody has their own fingerprint or snowflake, choose your metaphor. Just because these use cases sound great doesn't mean they're a good fit for everybody. So I think that was just a really good reminder. More broadly speaking, I think they all agree, even Dave, who's just starting this journey, that intellectually the coexistence of these tools plus the human is going to make both better. The human's not better than the machine, the machine's not better than the human, but together they're better than either one alone. The only other thing that came to me as I heard Dave speaking, and was thinking about his passion for and commitment to fostering a really healthy organizational design and culture, is: what is the impact of the machine at the table, whether it's ADA or whether it's more upstream in the day-to-day lives of the analysts? What does that mean for organizational design, for culture and morale, for even things like performance evaluation?
Are you rating the machine? Are you rating the human's use of the machine? Are you rating the human's choice of machine? So I think it just begs a few questions that we've never had to consider, because there are now other inputs, with quite literally a mind of their own, influencing the performance of the human and the team and therefore the corpus of assets. And I just think that's a really interesting but very challenging HR and organizational design dilemma that's going to have to be addressed in the next decade.
Kristy Townsend
I completely agree. And I will add, it's funny, because you used the term "a mind of its own," but we know from the structure that, the way it's currently iterated, it's just predicting the next word. But beyond that small point, I will say I 1000% agree with you in terms of the impact, because one of the biggest things that I worry about too is the training of a new generation. And granted, the new generation always takes on new tech. Back in the day there was no email; now it's a part of our lives. Back in the day you had Lotus 1-2-3 for spreadsheets, and now Excel is this foundational tool on which pretty much the entire financial industry is based. But within that, having had to build those spreadsheets, I know where the skeletons are. I know how rates can impact them. I have been the person who gets to make those decisions. And to the extent that we ever turn over that decision making, that setup, to artificial intelligence or any tool that we then consider software, I think it could be a danger to how future generations learn the craft. But then again, the industry may change completely. Maybe we'll learn new insights.
John Bowman
I'm certain we will, just on that last phrase. But I think you bring up a really profound point, which is that the vulnerabilities and the Achilles' heels of models and machines are just harder to discover than with the rubric we use to evaluate humans, or more specifically, investment professionals or analysts. We just don't have a pattern or a lens by which to do it, and it's just less tangible. It's harder to get inside the ones and zeros of a digital world, and code, than it is the human mind, even with all of its foibles and faults. So, lots of big existential questions there that I'm sure are beyond the scope of this conversation. So, Kristy, we've got a fun question that is somewhat timely, based upon the fact that this will drop just before US Thanksgiving. I thought I would ask your favorite Thanksgiving dish.
Kristy Townsend
I love green beans, but not fresh green beans, the way my grandmother would make them. I think a lot of families do this, and I'm the person in the family now who makes them: a piece of bacon for every can of green beans, on the stovetop for 12 hours, or you could do it in a pressure cooker in about two. They turn into basically a level above mush, green beans and bacon just fused together. And I love it. I do not eat it for the rest of the year because I want to look forward to it. What about you?
John Bowman
So first of all, anybody who suggests that turkey is a good dish, much less the best dish, is either lying or still somewhat under the spell of tradition. I'm just kidding. But as a result, the Bowmans abandoned turkey at Thanksgiving a long time ago. It's just dry and gross. Let's be honest. But that's not the answer to the question. We have all kinds of fun replacements as the main dish, but the real answer is that I make a mean, ninja-level stuffing myself. Now, I don't do this every year, but this recipe has some sausage in it, hazelnuts in it, and, a bit more traditionally, celery and carrots. And it is fluffy, not the mush that typical stuffing comes with. As you can tell, I'm very proud of this capability. It's one of the very few cooking assets that I bring to the table. No pun intended. See what I did there? My versatility at the stove is really weak, but this is one dish I think I've mastered. So there you go. Join me for my stuffing.
Kristy Townsend
I want that recipe now. Hear that, listeners? John is inviting all of you over.
John Bowman
For Thanksgiving! Live Capital Decanted, from the table.
Kristy Townsend
Oh, we should do that at some point. Maybe not Thanksgiving, but I think it...
John Bowman
...would get pretty crowded, given the listenership these days, which we're thankful for.
Kristy Townsend
Very thankful for.
John Bowman
Well, Kristy, that was outstanding. Thanks for all your prep, your great research, and of course your contributions, and thanks to all of you listeners for hanging in there with us. As I say every single episode, I hope you learned as much as, if not more than, we did. It's been fun to walk through this with you. Hope you're well, take care, and we'll see you next time on Capital Decanted.
Capital Decanted: Episode 3 Summary – "Will Artificial Intelligence Replace the Investment Professional"
Release Date: November 26, 2024
Hosts: John Bowman & Kristy Townsend
Guests: Martin Escobari (President, General Atlantic) & Dave Moorhead (CIO, Baylor University)
Sponsor: Alternatives by Franklin Templeton
In Season 2, Episode 3 of Capital Decanted, hosts John Bowman and Kristy Townsend delve deep into the burgeoning role of Artificial Intelligence (AI) in the investment landscape. Titled "Will Artificial Intelligence Replace the Investment Professional," the episode navigates the complexities, potentials, and challenges AI presents to traditional asset management. By engaging with industry leaders Martin Escobari and Dave Moorhead, the discussion balances skepticism with optimism, aiming to unravel whether AI is poised to render investment professionals obsolete or merely augment their capabilities.
John Bowman opens the episode by highlighting the meteoric rise of AI, noting Nvidia's valuation at approximately $3.5 trillion and the colossal investment by tech giants in AI research and development. He references Carlota Perez's "frenzy phase" of technology adoption, suggesting that AI is currently riding Gartner's peak of inflated expectations.
John Bowman [01:30]: "AI processing power provides organizations with massive advantages. Machines process at roughly 2000 times the speed of humans... but it has a dark underside too."
Bowman emphasizes Amara’s Law, which cautions against overestimating short-term AI impacts while underestimating its long-term potential. He outlines the episode’s structure, focusing on AI's specific applications within investment management and questioning whether AI has reached a tipping point to disrupt traditional asset allocation.
Kristy Townsend provides a concise history of AI's integration into investment management, tracing its roots back to quantitative hedge funds in the 2010s. She discusses how firms like Renaissance Technologies and Bridgewater have utilized AI for market analysis and trading patterns, evolving into more sophisticated models incorporating machine learning and natural language processing (NLP).
Kristy Townsend [05:15]: "The history of AI and investment management has a much more abbreviated life... Machine learning has just enhanced efficiency, its processing power, its speed of said models."
Bowman draws parallels to previous technology hype cycles, suggesting that while AI is maturing, it remains entangled in sensationalist narratives. He references examples like Simulmatics, a Cold War-era computer-simulation firm whose tools were used for political persuasion, illustrating AI's longstanding and multifaceted influence.
John Bowman categorizes AI applications in investment management into three progressive areas:
Efficiency: Automating mundane tasks such as scraping earnings calls, transcribing reports, and drafting investment memos using NLP and voice recognition.
John Bowman [12:45]: "Think of it as a machine that is your co-pilot that augments existing capabilities and exponentially improves efficiency."
A notable case includes American Century’s sentiment model analyzing management language to flag potential red flags in earnings calls.
Unstructured Data Insights: Leveraging AI to process and derive insights from non-traditional data sources like social media, satellite imagery, and credit card purchases.
Kristy Townsend [14:30]: "Imagine using shipping container tracking and credit card purchases to paint a much more accurate picture of a business’s velocity."
Schroders' use of NLP to identify companies benefiting from the "smart cities" trend exemplifies this category.
Recommendation: AI autonomously makes investment decisions based on vast datasets and historical decision-making patterns.
John Bowman [20:00]: "Machines considering a mosaic of information and making independent decisions... turning AI into the captain of the investment process."
An early example is General Atlantic’s AI "Ada," which participates in investment committee decisions by analyzing 20 years of investment data.
Martin Escobari (General Atlantic):
Martin Escobari shares General Atlantic’s pioneering work integrating AI into their investment committee, highlighting the development of "Ada," an AI trained on 43 years of investment data and decision-making records. Ada functions as a co-pilot, providing fact-based assessments and influencing committee votes.
Martin Escobari [33:50]: "ADA is more negative than us, saying no more often because she’s 100% fact-based. She helps us think slightly different and likely gets better with time."
Escobari emphasizes the collaborative relationship between AI and human analysts, asserting that AI enhances rather than replaces human judgment. He discusses the phased development of Ada, from basic data analysis to more interactive versions capable of challenging human decisions.
Dave Moorhead (Baylor University):
Dave Moorhead echoes a cautious yet optimistic view on AI’s role in investment management. He outlines Baylor’s top-down investment approach, which currently does not heavily integrate AI due to their discretionary strategies and focus on liquidity. However, Moorhead acknowledges AI’s potential in operational efficiencies, such as automating due diligence and enhancing data analysis.
Dave Moorhead [58:00]: "Any implementation of machine learning is pretty specific to your unique organization or the unique needs of your organization... It’s not a one-size-fits-all solution."
Moorhead raises concerns about AI reliability, citing issues like data leakage and model hallucinations that can undermine decision-making processes. He stresses the importance of human oversight and the limitations of current AI models in fully autonomous roles.
John Bowman [25:00]: "We see growing self-reported and what you might call conviction acceleration that seems to support a view that we may be near that broad-based inflection point."
Bowman discusses survey data indicating increasing AI adoption among asset managers, from 10% in 2019 to 91% in 2024. He contemplates whether AI will remain an efficiency tool or evolve into a transformative platform reshaping capital allocation fundamentally.
Martin Escobari [55:30]: "This is my third hype cycle in my adult life... AI is the most transformative because it will touch all corners of the world, all industries, and change most business activities in profound ways."
Escobari compares AI's current cycle to previous technological revolutions like the Internet and cloud computing, emphasizing AI's unparalleled speed and capital intensity. He predicts significant consolidation in the GP landscape as minimum efficient scale rises, driven by the high costs and technological demands of AI infrastructure.
Dave Moorhead [72:50]: "Implementing a system that tracks investments and iterates on itself could be a really useful tool... but we remain cautious about AI fully replacing human judgment."
Moorhead supports the notion that AI should augment rather than replace human decision-making, highlighting Baylor’s strategic focus on AI’s role in enhancing analytical processes without undermining discretionary control.
Martin Escobari [82:40]: "Regulation needs to be global. We're facing risks with deepfakes, AI breaking into security systems, and the ethical use of AI in content creation."
Escobari underscores the urgent need for global regulatory frameworks to address AI’s ethical challenges, including data privacy, misinformation, and the potential for AI-driven manipulations. He draws parallels to the newspaper and music industries, advocating for equitable sharing of AI’s economic benefits.
Kristy Townsend [85:00]: "The intersection of human and machine work raises questions about organizational design, culture, and performance evaluation."
Townsend highlights the organizational dilemmas posed by AI integration, such as redefining roles, maintaining morale, and establishing new performance metrics that account for AI’s influence.
The episode concludes with reflections on AI’s nuanced role in investment management. Hosts Bowman and Townsend, alongside their guests, agree that while AI holds transformative potential, its integration must be approached thoughtfully. The consensus emphasizes AI as a powerful tool that, when combined with human expertise, can enhance decision-making, efficiency, and strategic insights without supplanting the invaluable human elements of judgment and creativity.
Key Takeaways:
AI as Augmentation, Not Replacement: AI can significantly enhance investment professionals' capabilities by automating routine tasks, providing deep data insights, and supporting decision-making processes.
Collaborative Relationships: Effective AI integration involves a symbiotic relationship where AI tools like Ada assist but do not replace human analysts and decision-makers.
Ethical and Regulatory Imperatives: The rapid advancement of AI necessitates robust global regulatory frameworks to safeguard data privacy, prevent misinformation, and ensure ethical usage.
Customization and Specificity: AI implementations must be tailored to the unique needs and structures of organizations, recognizing that a one-size-fits-all approach is ineffective.
Future Outlook: While AI is set to revolutionize asset management, the extent and nature of its impact will depend on technological advancements, regulatory developments, and the ability of firms to adapt strategically.
Notable Quotes:
John Bowman [01:30]: "AI processing power provides organizations with massive advantages... but it has a dark underside too."
Martin Escobari [33:50]: "ADA is more negative than us, saying no more often because she’s 100% fact-based."
Dave Moorhead [58:00]: "Any implementation of machine learning is pretty specific to your unique organization... It’s not a one-size-fits-all solution."
Kristy Townsend [85:00]: "The intersection of human and machine work raises questions about organizational design, culture, and performance evaluation."
Capital Decanted continues to explore the intricate balance between human expertise and technological innovation, providing listeners with in-depth analyses and forward-thinking perspectives essential for navigating the evolving landscape of capital allocation.