
A
The whole of global equity markets, excluding technology, has seen effectively no growth in profits since around 2007.
B
William set up Blue Box Investment Management back in 2018, since which time funds under management have grown to over £2.3 billion.
A
In my view, it's the big software platforms, the cloud companies such as Adobe, which are perceived as being the biggest losers from AI. But personally, I don't think that's how it's going to work out. At the moment, there are almost no companies that have made money out of the use of generative AI.
B
Can we trust the guardrails that are apparently being put in place to protect users? And was that the reason why Musk and Altman fell out?
A
I don't think we can, because there's a problem. There's a massive tension here between putting guardrails around your system and pushing the envelope on capability. And for all of these businesses, they are determined not to lose the race this time.
B
Last year you stood up at a charity foundation's stock-tipping conference and were asked to explain what your best tip was for the next 12 months. That stock was Lam Research, and many congratulations, because it was up 100% over the following 12 months. If you'd been at that same conference last week, what would your stock tip for the next 12 months have been?
A
I think it probably would be.
B
Welcome to Algy's Investment Podcast. In this episode, my guest is William de Gale. William's been on the show once before, and this is a very, very fascinating time to be revisiting the technology space. Just to give a quick recap: William set up Blue Box Investment Management back in 2018, since which time I think funds under management have grown to over £2.3 billion. And those investors who were lucky enough to have got in at the start would now be up very nearly 300%. So, William, thank you so much for joining me today and giving us the chance to catch up with what's been going on in the technology space.
A
Pleasure.
B
So, William, my first question on the technology front is very simple. What was it that inspired you, as an ex-military man, to take off your jackboots and end up becoming a technology titan in the fund management space?
A
Yes. Well, for me it would have been drab boots, of course, because I was riding horses at the end of my army career. So I was looking to go into industry, but industry didn't seem particularly interested at the time. And I was shooting with another ex-Life Guards officer, many, many years senior to me, and he asked me what I was thinking of doing. I said, well, I'm not quite sure yet; it doesn't seem to be working so far. He said, well, what was the most interesting thing you did in the army? And I said, well, it was to serve as an intelligence officer. And why was that interesting, he said? And I said, well, because we had watchtowers and patrols, we had a reasonable idea, we felt, of who the other side were. And we got up to a thousand reports a day: sightings of individuals, or vehicles, or people associated with them, whatever. And then at the end of the day, my job was to go through all that data that had been reported to my intelligence cell and spend, normally, the night working out what was going on. Spot the patterns, predict the future. The chap said, well, in that case, I think you should be a fund manager, because that's pretty much what I do: I spot the patterns and I predict the future. And he's absolutely right. It's a very similar job but the pay is better and no one's trying to kill me. So it's definitely an improvement, in two ways.
B
That's definitely a win win, isn't it?
A
Yes, definitely.
B
Moving on to a very, very important question because there's a lot of confusion here for some people. What is the difference between Google and ChatGPT?
A
So, well, Google is part of Alphabet and it's primarily a search engine. You would put in some sort of search terms, and Google, which has a good idea of what's out there on the web, would draw your attention to a number of websites featuring those words, in a way that had interested previous people who'd searched for something like you had, and would list them in order, with a few paid places at the top. The rest of it would be sort of natural search. So it's basically looking at its knowledge of the web and then sending you to the most likely destinations, and it rewards itself, as it were, if it's successful in that. Whereas ChatGPT is a generative AI system created by OpenAI, one of many generative AI systems, and here the system has essentially been trained on pretty much everything that's ever been written down and recorded, and is very, very good at predicting what the next block of characters, words, whatever, would be after the one it's just put in place. So it looks at your question and it thinks, well, if I'm being faced with that sort of block of characters, statistically the best answer is probably going to be one of these. It won't necessarily give the same answer every time, but it'll give one that it feels is likely to be satisfying, and that block can be very long. So it's not searching the Internet; it's basically looking statistically at all the information that's ever gone through its training data and just looking for probable successes. You then don't have to go to the web to look stuff up, because it's sitting there in the generative AI box. And with Google Search now, you have the traditional search results below a box at the top, which is Google's own AI. It will give you one of these sort of integrated summaries of what might be out there, and again that will often satisfy you, so you don't need to go to the web.
So hopefully the intention is to satisfy you, but without you having to go off and do all the trawling through different links and websites and stuff. In sum, that's one way of doing it.
B
Fascinating, fascinating. Who owns ChatGPT?
A
So ChatGPT is created by OpenAI, which in turn is owned now, after the reorganization, by, I think it's the OpenAI Foundation, which is a charitable not-for-profit that owns a chunk of it. Microsoft owns, I think, 27% of it. And there are various other investors, private equity, venture capital, big corporates, that own pieces of the business as well. So OpenAI is the entity concerned, but it has various owners itself.
B
And do you have a favorite amongst the generative AI systems out there?
A
Well, the one that I would use is, very simply, generally the one at the top of Google Search, and occasionally I will go to ChatGPT. But to be honest, I'm not a huge fan of artificial intelligence for my own use. In general it's interesting, and it's driving lots of spending and, to our benefit, making my stocks go up. But I'm not overly convinced that in many applications it's particularly good for the world. I'm not a huge fan in many ways, but I'm very old-fashioned in that respect.
B
No, I think there are a lot of young people who also agree with you. How much energy does generative AI use in comparison to, say, a Google search?
A
So it's far more energy intensive. I've heard the figure of ten times, and it depends what you're doing: if you were doing some sort of video generative AI, then it's massively more than that. This slightly turns the economics of AI upside down, because the original Internet in the 90s and early 2000s, and then Web 2.0 from about 2010 onwards, were based largely on being paid per view, generally through advertising. You had some sort of fixed cost, and then every time you got a page view or some usage, you got incremental revenue and incremental profit, because the cost of delivery was very, very low. So that rewarded growing very, very fast indeed, because that grew you towards profitability and then into profitability. The economics are not necessarily the same now. For much of AI, the cost is very high and the revenue is quite often fixed, or considerably less than the cost, which means that growing as fast as possible can just be a quicker route to bankruptcy and running out of cash. So there is a danger of applying old business models, or sort of blitzscaling, to an environment where the marginal profit from additional use is negative, at least as the economics are currently set up.
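The contrast William draws can be sketched with a toy model. All the numbers below are purely illustrative assumptions, not figures from the conversation or from any real company: the point is only that a per-view ad business gains profit with scale, while a flat-rate generative-AI subscription can lose it.

```python
# Toy model of the two business models being contrasted.
# Every figure here is an illustrative assumption, not real data.

def web2_profit(views: int, revenue_per_view: float = 0.002,
                cost_per_view: float = 0.0002,
                fixed_cost: float = 50_000.0) -> float:
    """Ad-funded model: tiny marginal cost, so more usage means more profit."""
    return views * (revenue_per_view - cost_per_view) - fixed_cost

def genai_profit(queries: int, subscription_revenue: float = 240_000.0,
                 cost_per_query: float = 0.01) -> float:
    """Flat-rate generative-AI model: revenue is fixed, cost scales with usage."""
    return subscription_revenue - queries * cost_per_query

# Growing usage 10x carries the ad model from loss into profit...
assert web2_profit(10_000_000) < 0 < web2_profit(100_000_000)
# ...but pushes the flat-rate AI model from profit into loss.
assert genai_profit(10_000_000) > 0 > genai_profit(100_000_000)
```

Under these assumed numbers, "blitzscaling" helps the first model and sinks the second, which is the inversion described above.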
B
That's quite scary.
A
Yes.
B
So what is a GPU and what's it powered by? And why are they built in clusters of hundreds or thousands?
A
Yes. So a GPU is a graphics processing unit, and that's really in contrast to an MPU, which would be a microprocessing unit. The old computing paradigm was Intel and the like, using essentially sequential processors, which would do one calculation, followed by another, followed by another, and do them very, very fast indeed. But it would only do one, or four, or possibly eight simultaneously. A graphics processor was designed to change all the pixels on your screen as fast as possible, as smoothly as possible, and pretty much simultaneously. So it does thousands of very, very small calculations simultaneously. And it was realized, probably 15 years ago, that these would actually be quite good for processing vast amounts of data, sort of supercomputing, because they just get through the data thousands of times faster. What they're doing in each of those transactions is not necessarily particularly complicated, but they're doing thousands of parallel streams, and that's proven to be correct. AI is now basically run largely on graphics processors, with the microprocessor still sort of controlling the overall system, but the graphics processors doing the vast majority of the work. And that parallelism extends beyond one chip to multiple chips: you have clusters, you have sort of mega-clusters, you have thousands and thousands of these things linked together. And there are lots of techniques to allow them to speak rapidly with each other, transfer data, get data in and out of common storage, and so on. And that's really how the envelope is being pushed within AI computing: you get more and more chips and memory to work together very, very efficiently, and you just get through the data much, much faster is the answer.
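The sequential-versus-parallel contrast can be sketched in Python. This is only a stand-in for the programming model: a real GPU runs thousands of hardware lanes truly simultaneously, whereas here the "lanes" are simulated, so no actual speed-up is shown.

```python
# Conceptual contrast between CPU-style and GPU-style execution.
# Pure-Python sketch: illustrates the programming model only, not real speed.

def cpu_style(data):
    """Sequential: one value at a time through the whole pipeline."""
    out = []
    for x in data:               # each element waits for the previous one
        out.append(x * 2 + 1)
    return out

def gpu_style(data, lanes=1024):
    """Data-parallel: one instruction applied to a whole block of values.
    On a real GPU, each block of `lanes` elements would execute at once."""
    out = []
    for i in range(0, len(data), lanes):
        block = data[i:i + lanes]
        out.extend(x * 2 + 1 for x in block)  # one 'instruction', many elements
    return out

pixels = list(range(10_000))
assert cpu_style(pixels) == gpu_style(pixels)  # same result, different model
```

The results are identical; what differs is that the second style exposes thousands of independent, identical operations that parallel hardware can execute at once, which is why the same idea scales from one chip to clusters of chips.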
B
And where are these massive warehouses being built? Are they above ground, below ground?
A
Well, they're very big, so you wouldn't be able to put them below ground. The biggest data centers are now sort of measured by their energy capacity, and one gigawatt is an absolutely vast amount of power. A standard 1 GW AI data center, I'm told, would cover something like 600 football fields. It's an enormous area. If you look from the sky, it's just an almost endless box with lots of little boxes sticking off it. A vast, vast area. So you couldn't put it underground, because that would be a heck of a hole. These things are sprouting up all over the place. It's useful to put them somewhere where energy is relatively cheap, because once you build them, the biggest expense is the energy you use to run the GPUs, which are extremely energy intensive, and then to cool them down. All the energy you're putting in for processing doesn't just disappear; it comes out as heat. So you're putting in a gigawatt of power, which is effectively tens of thousands, hundreds of thousands of homes, and that comes out as heat. So then you have to extract that heat and hopefully do something useful with it, but I think in most cases they're just extracting it and sticking it back into the atmosphere. And the extraction uses a lot of power as well. So you need to be somewhere where power isn't too expensive. But it would also be nice to be somewhere near where people are actually using the capacity, because the further away you are, the more sort of bandwidth and piping you have to put in to get it somewhere useful. So there's a compromise there. And also: where can you get permitting and subsidies to run this stuff? And are you allowed to ship the components to that location? The most sophisticated chips can no longer be sold by American or Western companies to China.
So there's no point in building a new AI data center in China if you're Google, because it'll be empty; there won't be anything in it. These are all factors that you'd have to bear in mind when choosing where to place your data center. But energy is probably the biggest constraint. And electricity grids globally are not currently well set up to supply huge one-gigawatt or multi-gigawatt facilities from scratch in a given location. They're not wired into the grid yet, and the grid may not be able to cope with that amount of power requirement all at once. So in fact it's grid connections that are one of the biggest constraints on AI growth currently.
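The gigawatt-to-homes comparison above can be sanity-checked with rough arithmetic. The household figure below is an assumed average (roughly 10,500 kWh per year for a US home), not a number from the episode.

```python
# Rough sanity check of "a gigawatt is hundreds of thousands of homes".
# AVG_HOME_W is an assumed average household draw, not from the episode.

DATA_CENTER_W = 1_000_000_000   # a 1 GW facility, in watts
AVG_HOME_W = 1_200              # ~10,500 kWh/year works out to ~1.2 kW average

homes_equivalent = DATA_CENTER_W / AVG_HOME_W
print(f"1 GW is roughly {homes_equivalent:,.0f} average homes")

# "Hundreds of thousands of homes" checks out under this assumption.
assert 500_000 < homes_equivalent < 1_000_000
```

Under that assumed household figure, one gigawatt is on the order of 800,000 homes, consistent with the "hundreds of thousands" quoted above.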
B
You didn't quite answer my question, though: where are these massive great warehouses? I mean, are they in Greenland, are they in North America?
A
So I think the original ones are probably in places like Virginia, which is where lots of the big original data centers were, but I suspect they've now moved out into more remote locations in the US, and probably Canada. Greenland might be a little bit too far, but there's probably one somewhere. If you've got hydropower, that would be a good place to put one, so I suspect places like Norway would be useful, because the power would be effectively free if you're at the location. So I suspect there's a general migration from population centers and traditional computing centers towards more energy-rich locations.
B
That's really fascinating. And then, how much money? I've always wanted to know: how much money is going to be invested in AI globally?
A
That probably depends who you're asking. I think Sam Altman, who runs OpenAI, was talking, last time I heard, about $1.4 trillion of investment, which is a truly staggering amount of money. Personally I don't think that's very likely, from OpenAI anyway, and what I'm saying may become very rapidly dated, but it seems to me that that's probably pushing it a bit. But we're already talking hundreds of billions of dollars annually of investment, and that number still seems to be going up.
B
And can we trust the guardrails that are apparently being put in place to protect users? And was that the reason why Musk and Altman fell out?
A
So I don't think we can, because there's a problem. There's a massive tension here between putting guardrails around your system and pushing the envelope on capability. And all of these businesses are determined not to lose the race, really, to artificial general intelligence, where the robot brain can take on a new task and work it out without training, so that what it knows in one situation becomes immediately useful in others. That would suddenly and massively increase the capabilities of the system, and none of them want to let one of the others get there first. If you start constraining what they can do with their systems, then that's going to constrain their growth rate. And you can see this on a government scale as well. Europe took a fairly cautious view early on towards AI and was building a set of rules that seemed probably quite sensible, all else considered, to try and keep things under control and stop it becoming too dangerous and sort of tearing the fabric of society. And the result of that is that Europe is now seen as a loser, as having basically regulated itself out of the race for AI, and so maybe it's now loosening up on those rules a bit. There's no real incentive to produce safe AI, because you'll be streets behind everybody else, which is tricky from the point of view of humanity as a whole.
B
And then who are the big spenders?
A
So the biggest spenders have been largely big, publicly listed, extremely cash-rich companies: Microsoft, Alphabet, Meta. These businesses have the computing knowledge and capabilities, they have the data centers to start working with, and they have huge free cash flow, a big portion of which has recently been reinvested in developing AI and building out AI capacity and capabilities. Before them, of course, it was mainly private companies such as OpenAI, and over time those have been joined by the public ones, which is where the real acceleration in spend came from. And then there are businesses which are not as cash-rich. Oracle and CoreWeave, in the public markets, would be two examples of businesses which are essentially borrowing in order to make these investments. That's a scarier proposition, because if Microsoft makes a mistake with its investment, well, it just writes off half a trillion dollars. It's not good, but it's not the end of Microsoft. If Oracle, or CoreWeave in particular, makes a mistake and wastes capital, that could be the end of the business, because they've leveraged themselves up on the assumption that the investment works out. So that gets a little bit scarier and riskier. And then there's a considerable amount of circularity in the industry. Nvidia is the clear big winner of the build-out. It's the bottleneck on chip production, and it's clearly over-earning as a result of that positioning. It's generating huge amounts of money, huge amounts of cash, and it has to do something with that. It could give it back to shareholders; when it does that, everyone seems to complain, and it's probably not going to pay a massive dividend either. So what's it going to do? It starts investing it in all sorts of other businesses, but it also effectively invests it in its customers.
So it could just take a stake in OpenAI on the understanding that OpenAI will then spend that money, and a bit more, adding capacity, which would involve buying lots of Nvidia chips. So there's a certain circularity. And it's not just Nvidia; it's various companies in this space. That circularity is slightly worrying people at the moment. It's always there to a degree in the tech sector, and it can be healthy: it can effectively bootstrap a new business, or a rapidly growing business, upwards. But the degree of circularity, and the complexity of the web of ownerships and dependencies that's been created, is getting very complicated. And that in itself is just beginning to worry people, I think.
B
And what is the ultimate prize? Is it worth winning? And who do you think is the most likely to be there or thereabouts?
A
What they appear to be aiming for is artificial general intelligence: this transferability of capability from one realm to another by the machine, so that it doesn't need retraining for a new task. It has an understanding of the world, essentially, and it can apply that to the next task. I'm not an engineer, and I'm certainly not an expert in this area, and I might be proven wrong, but it doesn't seem to me that we're anywhere near that; on the route being taken at the moment, I think they just won't achieve it. But that's what companies appear to be aiming for, and there's a view that it's winner-takes-all: the first one that gets there is going to take almost everything, and so it's a disaster not to be the first one there. So the big companies are all investing to keep themselves in the game, to have a realistic possibility that they might be the one who gets there first, if any do. There's a certain prisoner's dilemma element to that. I think they're investing to keep ahead of each other. If they could, they'd all slow down a bit, but they can't, because if anyone does, then the others will overtake them in the spend and therefore, in theory, get closer to this, in my view illusory, prize.
B
So there is a risk that we are heading towards a build-out bubble?
A
I think there is a risk, but I don't think we're there yet. There have been lots of conversations about bubbles for quite a few months now. For example, we did our regular quarterly call for the Blue Box Tech Fund a few weeks ago. I think we got nine questions in advance, and sort of eight and a half of them were essentially: is there an AI bubble currently? And if everybody's worried about an AI bubble, then I don't think we're there yet, because that's not what a real bubble feels like. In my experience of a bubble, and I started covering the sector back in the late 90s, just as the bubble of the first Internet build-out was beginning to inflate, society goes all in. Everybody wants to be part of it. Very few people question it anymore; everyone just assumes it's going to go on forever. That's not what it feels like at the moment. And if you look at the overall growth of technology spend and the overall growth of the technology sector, to me they seem still to be within rational constraints. It's very rapid growth, more than 15% a year for the last 16, 17 years, but we're within the regular bounds of that. Within tech, a lot of the spend is going towards AI, but there's a lot of technology that's not AI. So if you look at technology as a whole, I'm not too worried that we're in a bubble; I think we're within the bounds of rational individual decisions. Almost certainly some appalling decisions have been made, and those could result in the downfall of individual businesses, but it doesn't seem to me to be systemic. But this is very much a statement made in November 2025. In two years' time, what I'm saying now will look stupid one way or the other: either it was clearly a bubble, or why did you even think it was a bubble? At the moment I think we're sort of on the cusp. At the beginning of the month it looked as though we might be heading into a manic phase, and the market stepped back.
We're all taking a bit of a breather. The market's gone back down a little bit; it's moved back into the bounds of the rational. So I don't think we're in a bubble, and I think the market is very aware of the bubble risk. Individual companies will make appalling mistakes, and that's just capitalism, you know; that's just what happens.
B
And actually, whilst you talk about risk, company risk, of course there's a human risk here as well. My question really is: all of this ability to generate information at the touch of a button, surely that devalues knowledge, that precious component of human society, at an individual level?
A
Yes, I think that's possible. And it does appear that for people who get used to using generative AI to think for them, their ability to think for themselves begins to degenerate a bit; some studies seem to show that. But I wonder whether it could go the other way: if you're the one person in ten, or the one person in five, who actually thinks for themselves, thinks originally, and doesn't just rely on generative AI to give you the most probable answer considering everything everybody has ever said before, then maybe those individuals will find it easier to stand out. I've been told I should use generative AI to create my monthly commentaries, for example. It would certainly be a lot easier: I'd press a button and it would be done. But actually I put a lot of thought into those commentaries, and I think there's nothing else quite like them in the market at the moment. It's an interesting experiment for me each month to try and summarize what I think has been going on and to talk a little bit about what might happen next, and it stimulates my brain rather than sending it to sleep. So it may be that this provides a way for humanity to distinguish between the 20% who are extremely lazy and the 20% who are extremely diligent, with everybody else falling in between the two. But maybe that's an optimistic way of looking at the world. But I am an optimist, so, yeah.
B
I think in fund management you have to be an optimist.
A
Yes.
B
What happens after AI? Is there an AI 2.0?
A
Well, I think they're hoping for AGI, Artificial General Intelligence. So that would be AI 2.0, and it would move everything onto a completely different level. But as I said before, I'm not an expert, I'm certainly not an engineer, and from my perspective there's absolutely no sign of us getting there at all on the current trajectory.
B
Which tech companies will win the next round of disruptive innovation?
A
Oh, I don't know. I'm less interested in the disruptors, to be honest. The companies that are doing exciting and adventurous things, turning the world upside down by making use of technology in a new way, are very exciting, but in my experience you don't make money out of them in the long run as a shareholder, as a public shareholder anyway. I mean, they're brilliant, they change the world, all sorts of fantastic things, but they're companies that spend money, vast amounts of money, and I'd rather own, as an investor, the companies that are selling them stuff and making the profits. And I don't really care so much what the next disruption is. There will be something that comes after generative AI, and it will drive spend, and my companies will sell stuff to it. The nice thing is that each of these rounds of disruption essentially relies on the same underlying technologies, so if you own the companies that provide those technologies, they'll be receiving the dollars for whatever comes next. So I don't need to work out what comes next. That's one of the advantages of our approach.
B
And what is the average long-term return for the average technology company over recent decades?
A
Yes. So if you go back to 2009, the equal-weighted S&P technology index, which is effectively the average US large-cap tech company, has been on an incredibly steady 15.2% growth trajectory. It's been growing in stock price by a very reliable 15% a year, and that's almost the same as the growth of profits for US tech companies over the same period, which is around 14%. If you look further back, to the mid-80s, it's been a little bit more than 10% from then to now, but there was an acceleration up to that sort of 15% level, which I see as happening somewhere in the middle of the first decade of the new millennium. Charts are interpreted in different ways: some would put the acceleration just after 2010, but I see it as just before 2010.
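The quoted 15.2% trajectory can be sanity-checked with simple compound-growth arithmetic. The year range below is an approximation of the period discussed, not a figure from the episode.

```python
# Sanity-checking what a steady 15.2% annual trajectory since 2009 implies.
# Illustrative arithmetic only; no index values are reproduced here.

rate = 0.152
years = 2025 - 2009          # roughly the period discussed (16 years)

multiple = (1 + rate) ** years
print(f"{rate:.1%} compounded over {years} years gives {multiple:.1f}x")

# A steady ~15% a year turns 1 unit into nearly 10 over the period,
# which is the scale of gain such a trajectory implies.
assert 9 < multiple < 10
```

So an investor on that trajectory since 2009 would have seen their capital multiply roughly ninefold, which is consistent with the "incredibly steady" long-run compounding described above.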
B
Moving on now to chip manufacturers, who is the most advanced chip manufacturing capability?
A
Undoubtedly TSMC. Taiwan Semiconductor is way ahead of everybody else at the moment.
B
And who has the world's most advanced software capabilities?
A
I think one would have to be more general, and you'd say it's either the US or China, but it's not an individual company. You could probably measure them in all sorts of different ways, and how you'd measure them today might turn out to be completely wrong in six months' or a year's time. It often depends on where a company is positioned, how much of its capability you actually see.
B
Right.
A
An example would be Alphabet, Google, which in my view has been one of the leaders, if not the leader, in generative AI all the way through. But it didn't want to play its full hand, because it had too much at risk. Whereas OpenAI, with Microsoft as its sponsor, didn't have a position in the search market; they had a lot less to lose. So they would appear to be pushing the envelope massively, with Google chasing, but in reality I think Google has probably been just as good all along. So it may well be that we don't get a full impression of who the leader is today. And the Chinese companies are clearly very, very good at this as well, but they are still handicapped by lack of access to the most sophisticated silicon. They have advantages, probably, in terms of cheap energy, a more versatile grid, and support from on high dictating that they can get all the power they need. But at the same time they have been unable to buy advanced processors, in particular graphics processors, or the equipment to make them, for, I suppose, about seven years now. So they are now seven-plus years behind in terms of semiconductor technology.
B
And Nvidia, is that a software company or a hardware company?
A
Well, it's an intellectual property company really, isn't it? Nvidia sells chips, so strictly it's a semiconductor company, which used to be classed as part of hardware and is now generally considered a separate industry group. But it doesn't actually make anything. Nvidia doesn't physically make anything at all; it pays Taiwan Semi, the world's biggest foundry, to make the chips for it. So it provides the designs, essentially intellectual property, for TSMC to make the chips; they're then finished off elsewhere, and they probably don't even go back to Nvidia but go direct to the end user. So essentially Nvidia is selling intellectual property through the medium of chips built for it, largely by TSMC.
B
Is Nvidia's ecosystem as robust as Apple's?
A
It does have a clear ecosystem, which is extremely powerful, but in a different way. Apple's ecosystem serves the users of its products, primarily consumers. You buy an Apple phone: it's built by Apple, it's running essentially Apple software, plus software from other people that's approved by Apple, and it's using chips designed by Apple, again generally built by TSMC. So it's vertically integrated to a degree, and it controls your experience within the Apple ecosystem, whether it's Macs or phones or whatever; particularly on the phones, it's heavily vetting what you can do with the device. It's also connecting those devices with each other, both your own devices (your phone speaks to your computer) and your phone to other people's iPhones, because, for example, iMessage works very well. People like sending iMessages to each other; it's very convenient, and it was particularly convenient a few years ago when there was no alternative quite as good. So there are lots of reasons why you would stay within the Apple ecosystem as a consumer. Nvidia, by contrast, has created an ecosystem for the training of generative AI models. Not only are Nvidia's graphics processors the most sophisticated general-purpose AI accelerators, but also, unless you're one of the very biggest companies, the chances are you'll be using the CUDA platform to create your AI model. CUDA was created by Nvidia many years ago, it's extremely good, and it works almost entirely with Nvidia chips; to my understanding it's very difficult to use with chips that are not from Nvidia. So if you're using CUDA, you're locked into Nvidia chips. That is an ecosystem, a lock-in, for the creator of a large language model who wants to use CUDA. That's very much on the training side.
Once you've trained the model, using it, deploying it, is inference, and there the lock-in is much less clear. That's not CUDA's field. Nvidia has created various platforms which people can use for inference, but it hasn't really managed to lock people into them, so there are many more alternatives on the inference side, which in the end will presumably be far bigger than training. So Nvidia's ecosystem is extremely powerful at the moment, but it's probably not as long-term as Apple's, and it's facing businesses rather than consumers.
B
Who are the outstanding picks-and-shovels businesses in the technology sector today?
A
So I would see the key one as being Taiwan Semiconductor, because it is an almost perfect bottleneck on the production of sophisticated semiconductors. But a much broader group, almost as important, are the companies that make the equipment to make the chips. These are the semiconductor capital equipment companies, which would include ASML, Lam Research, Applied Materials, Tokyo Electron, ASM International, KLA (formerly KLA-Tencor, now KLA Corporation) and various others. These companies make the machines that make the chips, and they are absolutely the foundation of the entire technology industry. So they make the picks, and then someone else sells the picks, and then someone else uses the picks, as it were. They are the pick presses of the industry.
B
Part two of my conversation with William de Gale will be published on Friday 16 January. Thank you very much for listening. All content on Algy's Investment Podcast is for your general information and use only and is not intended to address your particular requirements. In particular, the content does not constitute any form of advice, recommendation, representation, endorsement, or arrangement, and is not intended to be relied upon by users in making (or refraining from making) any specific investment or other decisions. Guests and presenters may have positions in any of the investments discussed.
Date: January 12, 2026
Theme: Expert insight into technology investing, the impact of AI, industry power dynamics, and William de Gale’s approach to picking long-term tech winners.
In this engaging discussion, veteran investor William de Gale reflects with host Algy Smith-Maxwell on the explosive growth and shifting landscape of technology investing. The conversation unpacks generative AI’s impact, how tech platforms are adapting to seismic changes, the economics and energy costs of AI, and which fundamental “picks and shovels” companies might endure beyond the current AI boom. Highlights include de Gale’s rationale for focusing on the suppliers behind tech revolutions, his skepticism on the reality of an AI bubble, and a sharp detour into data-center geopolitics, grid bottlenecks, and the enduring value of original thinking.
On the similarity between intelligence and investment:
“I spot the patterns and I predict the future. And he's absolutely right. It's a very similar job but the pay is better and no one's trying to kill me.” — William de Gale [03:10]
On trusting AI guardrails:
“There's a massive tension here between putting guardrails around your system and pushing the envelope on capability. And for all of these businesses, they are determined not to lose the race this time.” — William de Gale [00:41, 14:04]
On AI’s energy economics:
"Growing as fast as possible can just be a quicker route to bankruptcy." — William de Gale [07:23]
On speculative bubbles:
“If everybody's worried about an AI bubble, then I don't think we're there yet because that's not what a real bubble feels like.” — William de Gale [19:35]
On the enduring value of independent thought:
“Maybe those individuals will find it easier to stand out... But I am an optimist, so, yeah.” — William de Gale [22:26]
This episode offers a rare, clear view of tech investing from William de Gale—a pragmatist who looks past the AI hype to the economic realities beneath. By focusing on the “infrastructure” companies that directly profit from each new wave, de Gale shows why being a supplier can often be more lucrative—and less risky—than chasing the next great disruptor.
Stay tuned for Part 2, releasing January 16, 2026.