
Alex
GPT-5 may indeed be coming soon, and some good reviews are starting to arrive. OpenAI's Stargate program is off to a rocky start, AI labs go to war over math, and big tech earnings start to pour in. That's coming up on a Big Technology Podcast Friday edition, right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We're joined today by a special guest to help us break down the week's news. Stephen Morris is the San Francisco bureau chief at the Financial Times and is here to speak with us about everything OpenAI, GPT-5, and we might even cover some Tesla earnings. Stephen, it's great to see you again. Welcome to the show.
Stephen Morris
Great to be here, Alex, thank you.
Alex
So let's talk about these rumblings, because I don't think it's worth hesitating before we dive right into them: GPT-5, OpenAI's latest, greatest, biggest, most hotly anticipated, most delayed model, is seemingly on its way. The Verge says OpenAI prepares to launch GPT-5 in August. The story reads: earlier this year, Microsoft engineers were preparing server capacity for OpenAI's next-generation GPT-5 model, arriving as soon as late May. After some additional testing and delays, sources say that OpenAI now plans to release GPT-5 as early as next month. Sam Altman, the CEO of OpenAI, says that they are going to be releasing GPT-5 soon, and he talked about some of its capabilities with Theo Von and said some crazy things about it, which we'll get into in a moment. But first, on the timing. Stephen, is it time for us to believe OpenAI and the rumors that this thing is actually going to come? I almost felt silly putting it as the lead story in today's show.
Stephen Morris
No, I think that was the right call. This has very much consumed Silicon Valley and San Francisco recently, with speculation about whether we'll actually see it this time. Like you said, we've had so many false starts, so many false reports, but Sam Altman is out there telling everyone it's coming next month. Traditionally, as you know, August is a sleepy time for journalism, but definitely not this year. What we're a little thinner on is the details: what exactly is this, what does it look like, and will it be the leap forward that I really do think OpenAI needs it to be? There is a lot going on with this company at the moment, everything from negotiations on restructuring with Microsoft, to building this vast global Stargate network of data centers, to new devices with Jony Ive. But I think what underpins all of this, the $300 billion valuation, the hype, is the fact that it has one of the best, if not the best, models. And that hasn't necessarily been true for a while. So this is the chance for Sam and the rest of the researchers to really come out and shine.
Alex
You're saying it hasn't necessarily been true that OpenAI has had the best model for a while.
Stephen Morris
Certainly if you look at benchmarks, and if you talk to people around the sector, that's definitely the perception. For a while there wasn't really a legitimate competitor. But you talk to people now, including its most vital stakeholder, Microsoft. They're locked in these negotiations about OpenAI transforming itself into a for-profit entity, and these are very tense. But when you speak to people over there, they don't necessarily think it's clear that OpenAI is out in the lead, that it has the most persuasive, definitive model anymore, and you're starting to see it offer other things off its platform, like Elon Musk's Grok from xAI. And I think if GPT-5 is the slam dunk that Sam Altman is certainly telling everyone it's going to be, that will put those doubters and those questions to rest.
Alex
Yeah, GPT-5 being that slam dunk. I'm looking at LMArena right now. The number one model is actually a tie between Gemini 2.5 Pro and OpenAI's O3. OpenAI takes the next couple of spots with ChatGPT-4o-latest and GPT-4.5 Preview, and then Grok and Kimi K2 are hot on its tail. So there's definitely a lot of competition at the top. But I almost would say that ChatGPT, or OpenAI, has put more pressure on itself than its competitors have. The way this company speaks, there's been this long anticipation that its next big-number release, the GPT-5 release, would be something really special, would potentially even be artificial general intelligence. Although I guess that might be more the hype than the company itself, but just the way that it talks about how these models are going to work is pretty wild. Here's Sam Altman on Theo Von. My speculation is that they booked this because they thought this would be GPT-5 week, and then they just had to talk about what it might be, because they haven't released yet. Here's what Sam Altman says: "This morning I was testing our new model. I got emailed a question that I didn't quite understand, and I put it in the model, GPT-5, and it answered it perfectly. And I kind of sat back in my chair. I was just like, oh man, here's the moment. And I got over it quickly, I got busy, on to the next thing. But I felt useless relative to the AI. This was a thing that I should have been able to do, and it's really hard, but the AI just did it like that. It was a weird feeling." Altman also said that GPT-5 was able to code up a project for him in like five minutes that would have taken much longer otherwise. Are they overhyping this? Are they putting too much pressure on themselves?
Stephen Morris
I don't think Sam Altman has underhyped anything in his life, so I'd take that with a grain of salt. But if what people are saying and speculating about is true, what we're going to see is a much larger model that marries all of the innovations and capabilities, from reasoning to deep research to multimodal capacities, reading text, seeing video and audio, and then wraps that together into a very speedy and cost-effective package. It really could be a leap forward. What he has said about the models is that you shouldn't have to pick which one you use yourself for a variety of different tasks. Take the famous question of asking it how many Rs there are in "strawberry": models have struggled with this very basic thing, whereas they're extremely good at complicated math problems or coding. So if you ask it that question and the model doesn't accidentally go off and spend five minutes doing a very expensive, token-consuming deep research project, but just knows itself, that's a huge time saver for consumers, in particular less sophisticated ones that are just using this as a chatbot. But also, I think what OpenAI has been feeling in its competition with Gemini and Anthropic's Claude is that it is perceived to have slipped behind a little bit on coding. That is the most tangible and financially rewarding real-world application of these things so far. And I think what we're going to see is OpenAI really strike back and say, look, we can compete with Gemini, we can compete with Claude, and your business should buy our enterprise product. Not just the consumer chatbot, which has really captured the public imagination, but big companies taking out hefty multi-year contracts, giving it a lot more visibility into the future and its revenue.
And if the coding aspect lands right with this, I think that could be quite transformational for its business model.
Alex
That's a perfect lead-in, because The Information does have some news on that. They say GPT-5 shines in coding tasks. This is, I think, a brand-new story here on Friday. The Information writes: GPT-5 is almost here, and we're hearing good things. The early reaction from at least one person who's used the unreleased version was extremely positive. I'm just going to pause here. Whenever I read about this one person who said it was really great, I'm always like, that's Sam Altman. But anyway, they say GPT-5 showed improved performance in a number of domains, including the hard sciences, completing tasks for users in their browsers, and creative writing, compared to previous generations of models. But the most notable improvement comes in software engineering, an increasingly lucrative application of LLMs. GPT-5 is not only better at academic and competitive programming problems, but also at more practical programming tasks that real-life engineers might handle, like making changes in large, complicated code bases full of old code. That nuance has been something that OpenAI's models have struggled with in the past, and is one reason why rival Anthropic has been able to keep its lead with many app-developer customers. But as we've reported, this is still The Information, OpenAI is more than aware of this issue and has been working in recent months to improve the coding capabilities of its models. What are the implications if OpenAI is able to equal or pull ahead of Anthropic, which we know is the state of the art in coding, with this new model?
Stephen Morris
Well, they've pursued very different subscription revenue models. So far OpenAI is almost a verb, like Google: you ChatGPT the question, especially if you're a young student, whereas Anthropic's Claude just doesn't have the same brand recognition. But Anthropic has relentlessly gone after what they call enterprise customers, big businesses; it offers them access to its technology through APIs and has longer, bigger, and more visible contracts. OpenAI has long been jealous of this. It wants in on the game. It's also competing with Microsoft, its own partner, which offers these services through the Azure platform, and increasingly with Google and Gemini, which touts its coding chops. So if OpenAI is able to prove that its models are at least as good, if not much better, then it can start to take back some of this. And it really does change the competitive landscape, because I think ChatGPT is the undisputed winner of the consumer chatbot wars so far. What it hasn't proved is that it can make the transition to the business and governmental world in the same way that some of its competitors have. And maybe they were forced to go down that route because OpenAI was just sucking all of the oxygen out of the room on the App Store.
Alex
Exactly. And it's interesting that you mention that OpenAI is trying to do this. It's sort of a frenemy partnership with Microsoft at this point. Of course Microsoft is going after those enterprise use cases. They're probably coming up against Amazon reselling Anthropic; Amazon has invested $8 billion in Anthropic. And the two companies really have to play together if they're going to get this much better with their next model and really make that enterprise play. That's really an open question. And there was an interesting aspect of the Verge story, which is: does OpenAI declare AGI with this new model? Does it say it's reached human-level intelligence, artificial general intelligence, and sort of begin what might be a break from Microsoft? This is from the Verge story: the declaration of AGI is particularly important to OpenAI, because achieving it will force Microsoft to relinquish its rights to OpenAI revenue and its future AI models. Microsoft and OpenAI have been negotiating their partnership recently, as OpenAI needs Microsoft's approval to convert part of its business to a for-profit company. It's unlikely that GPT-5 will meet the AGI threshold that's reportedly been linked to OpenAI's profits. According to The Information, the companies have defined artificial general intelligence as a system generating $100 billion in profits. So let me throw this to you. Do you think they're going to declare AGI with GPT-5? And if they do, what happens to that partnership with Microsoft?
Stephen Morris
I hate to make bold predictions, especially in tech, because you can often be spectacularly wrong. But I do not think they're going to say GPT-5 is anything approaching AGI, however you choose to define it. There's also a lot of nuance in its relationship with Microsoft. Just for anyone that's not familiar, OpenAI has for a while been trying to restructure itself from a pure nonprofit pursuing AI for the benefit of all humanity to create an arm beneath that entity that can actually raise a lot more money, in particular debt from more traditional investors, which it argues is necessary for it to be able to build these data centers and invest in people and processing power to compete. Microsoft essentially has the keys to unlock that because of its early investment in 2019, which it has since increased tenfold. And one of the key clauses in this agreement is this AGI clause. Once OpenAI hits AGI, whatever that is, Microsoft is essentially shut out of the deal, the idea being that you shouldn't hand over the most powerful technology ever known to man to a for-profit company like Microsoft, because they can't be trusted with it. Back in 2019, this probably sounded like a good idea, because who knew when we were going to hit artificial general intelligence?
Alex
My, how times have changed.
Stephen Morris
Or superintelligence. And now you have Musk and Altman out there saying that they can feel the AGI on almost a daily basis. However, Microsoft is a big, ugly, competitive tech company that's been around for 50 years, and they're not just going to let Sam Altman say, I feel like AGI has been reached. Firstly, the board of OpenAI will have to form a subcommittee, in which Microsoft will have a say, to decide how to define it and whether they've reached it. And secondly, there's a financial aspect: this has to be able to generate, I think, and I may be wrong on this, more than $100 billion a year in revenue.
Alex
No, profit.
Stephen Morris
Profit, exactly. Which obviously is not really being made by any AI companies at the moment. So it's not like, by saying it feels like AGI, Microsoft is immediately excluded. Their lawyers and their chief executive, Satya Nadella, are much savvier than that. What I think we are seeing, though, is a path to a complete fracture in the relationship between these two companies. You said they were frenemies. The Financial Times has done a lot of reporting on this; I'm not even sure about the "fr" part of that relationship at the moment. They seem to be in almost outright war, briefing against each other and basically trying to secure the best deal for themselves and their shareholders. But there is an element, as you said, of mutually assured destruction here. OpenAI is Microsoft's only real play in AI at the moment. They've created their own team internally, led by Mustafa Suleyman, the co-founder of DeepMind along with Demis Hassabis, who's still at Google, but they really haven't had much success building their own models. And a lot of what they're offering their millions of enterprise customers off their Azure platform is their own spin on OpenAI's underlying technology. And OpenAI, for its part, relies almost entirely on Microsoft's Azure cloud computing network to train and run its models. It's a great distribution method, and Microsoft is also one of its biggest financiers, entitled to a huge share of its revenues. So these two are tied together. They've both jumped out of a plane, and it remains to be seen who's the first to blink and pull the parachute, and how far down they get before that. We wrote a story a few weeks ago that Microsoft is willing to just walk away if it doesn't get what it needs from these restructuring negotiations. And that includes the language around AGI.
Alex
It's interesting you bring up Mustafa, because I had him on; we spoke on YouTube and for Big Technology's newsletter. He was describing this new medical diagnostic orchestrator that Microsoft had built, and he said, look, as the models get commoditized, it's going to be the orchestration that creates the most value. And it's like, oh, okay. That sort of indicates a lack of faith in OpenAI continuing to have that lead, if you believe the models are going to get commoditized. But look, Stephen, I'm going to take the opposite side of yours on the AGI question. I think there is a decent chance, I'm not saying it's for sure going to happen, but a decent chance that they are going to say that GPT-5 is AGI, and by they I mean OpenAI, and then just see what the f happens. Because first of all, I think Sam likes chaos. Second of all, just the way that he's speaking about this thing: if this isn't something he would call AGI, then I don't know what is. Again, he says, I sat back in my chair and I was like, oh man, it was a "here it is" moment. Well, I would ask, what is "it" in a "here it is" moment? I think they had this with O3. I don't know if you remember, but Tyler Cowen said O3 was AGI. And my conspiratorially minded self said maybe somebody whispered in his ear that he should just call it AGI and sort of clear the way for someone like Sam Altman to say GPT-5 is AGI. And they were restrained and waiting, because you had a leading academic who said, okay, well, the previous model fits that pattern. I don't know. Am I crazy?
Stephen Morris
No, you're not crazy. This is the race to get there, and the scientific and monetary rewards if you do make it are just astronomical. And these things are coming along very quickly. It's just that with AGI there's no agreed definition. Even when you talk to Sam, and we do at the FT, he says, I don't even really know what that means anymore. But there's also a legal definition as well. He is the chief executive of a company worth $300 billion, and when you say things, they can often end up in court. OpenAI said a lot of things about what their company was and its mission, which is now being used against it by Elon Musk as he runs interference around the outside of this restructuring. For people that don't know, he sued OpenAI, saying it's abandoned its nonprofit mission. Elon Musk was, of course, one of the co-founders and one of the biggest financiers at the start. And at the moment it looks like the Delaware and California attorneys general have agreed with him and have said, actually, yeah, they don't look a lot like a nonprofit anymore. If he comes out and says, I think this is AGI, this is how I'm defining it and this is how we're going to prove it, like, this model is better than most humans at almost all tasks that we give it, maybe you could make an argument for that. And AGI is a word of particular importance to OpenAI, as we explained at length before, because of the restructuring. But a lot of people now are talking about superintelligence. Like you mentioned, Mustafa and Demis are talking about building systems that are capable of being far better than the best human at a huge variety of tasks, systems that can't just regurgitate and piece together the sum of human knowledge that they've harvested from the Internet, but can actually come up with new things, new ways of building rockets, new ways of generating power. And I think that's where it's heading. 
Did you not feel recently that that's where the goalposts have shifted, from AGI to ASI?
Alex
Yeah, it's definitely the new jargon term, which has made it even easier for a company like OpenAI to say, hey, you know what, we're going to call this AGI. And here's another quote from the Theo Von podcast, where Sam says: GPT-5 is the smartest thing. GPT-5 is smarter than us in almost every way. And yet we're here, dude. This is it. It's coming. Again, I'm not stating this conclusively, I'm leaving myself open to be wrong, and I'll admit I'm wrong if that's what happens. But if he doesn't come out and straight up say this is AGI, then there are going to be lots of winks at it. I think you're going to see a tidal wave of commentary calling it that when this comes out.
Stephen Morris
I think it's a smarter tactic to let other experts in the field, scientists, rivals, Trump, say it for you. And then you move on from that basis and say, hey, look, it's a subjective term. Let's take it to our board and see if they agree.
Alex
Wouldn't it be funny if Trump just came out and posted on Truth Social, this is AGI. And Sam's like, well, see, look, the President's saying it.
Stephen Morris
I wouldn't put it past any of them. Sam has obviously managed to get very close to Donald Trump, much to the chagrin of Elon Musk, who initially held the role of "first buddy." We've seen Altman in the Oval Office just days after the inauguration, announcing this huge Stargate project. He's appearing at the President's fundraisers. He's very, very good with politicians.
Alex
Exactly. All right, so in order to keep improving the models, these companies are going to have to build larger and larger data centers. And at the heart of it is this $500 billion push to create a massive data center project with Oracle and SoftBank on behalf of OpenAI. It's called Stargate. But there is some news now that Stargate is hitting some speed bumps, and we're going to cover that right after this. Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled with business and tech news and original stories to keep you in the loop on what's trending. More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now. And we're back here on Big Technology Podcast with Stephen Morris. He's the San Francisco bureau chief at the Financial Times. Great having you on, Stephen. Let's talk a little bit about Stargate. This is from the Wall Street Journal: the $500 billion effort unveiled at the White House to supercharge the US's artificial intelligence ambitions has struggled to get off the ground and has sharply scaled back its near-term plans. Six months after Japanese billionaire Masayoshi Son stood shoulder to shoulder with Sam Altman and President Trump to announce the Stargate project, the newly formed company charged with making it happen has yet to complete a single deal for a data center. Son's SoftBank and Altman's OpenAI, which jointly lead Stargate, have been at odds over crucial terms of the partnership, including where to build the sites, according to people familiar with the matter.
While the companies pledged at the January announcement to invest $100 billion immediately, the project is now setting a more modest goal of building a small data center by the end of this year, likely in Ohio. But I think that small data center is still a gigawatt data center. So small on the scale of what they promised, but still fairly large. What do you think is happening here? I'm really struggling to figure it out.
Stephen Morris
So I remember when I first heard about Stargate, I was in Davos, the big conference of the great and the good in a small mountain town in Switzerland. And this announcement blindsided everyone. I had actually met with the CFO of OpenAI just a few hours before, and she gave nothing away. And everyone looked at these astronomical numbers: half a trillion dollars, $100 billion initially, power on a scale almost unimaginable. Since then, we at the Financial Times have been trying to work out where this money is coming from and where it's going to be deployed. And just as you can see in the Wall Street Journal article, which is along the lines of what we've been writing as well, it's not clear that this is going well at all. They haven't identified very many sites. The money hasn't fully come in from the huge Japanese investor SoftBank. I guess in any conglomerate at the frontier of artificial intelligence, with multiple different agendas, it's very hard to get everyone on the same page: where do you even build these things? Is the power infrastructure there? This is all complicated by the tense restructuring negotiations as well. But what is very clear is that there hasn't been $100 billion immediately deployed, as they promised at that infamous White House event, and they're now changing the definition of it. Stargate, of course, is a reference to the film, I forget when it came out, where people travel to other worlds, and OpenAI called it Stargate because that was the biggest human infrastructure project ever in that fantasy universe. And that's what they want this to be. But they haven't actually managed to get it off the ground yet. And that must be somewhat concerning for both the company and its investors, because you have very, very big competitors out there, like Meta, Microsoft, and Google, who are snapping up land and power contracts.
They have deep government relations, at both the state and federal level, in the US and around the world. And you're trying to beat these guys at their own game, and they're not going to be particularly happy about it.
Alex
So I do want to wink at something that we're going to have on the show next week. One of the interesting things that you're starting to hear around AI research labs, and I'm curious if you've heard this, Stephen, is that scaling is back in vogue. There was a period of time where these labs were like, yeah, we're making these models bigger, we're adding more compute, but we're getting diminishing returns. Maybe with the advent of what Grok is doing and what Mark Zuckerberg is pushing, there's starting to be a wave of AI researchers believing in this sort of "bitter lesson," which is just that you don't really need new methods, you just need more compute. And that's how important this project is. I'm curious, have you heard anything like that?
Stephen Morris
Absolutely. Remember, the other thing that happened that week in January, when I was in Davos, was DeepSeek's release, which was built on top of other open-source products. And they basically said, you can be innovative, and you don't have to brute-force this with millions of GPUs all linked together on these vast, expensive training runs. Because remember, if one of these training runs goes wrong, it's like a billion down the drain, and you can't necessarily afford to do that too many times.
Alex
Right. That's why Mark Zuckerberg, instead of spending billions on training runs gone wrong, decided to spend it on talent.
Stephen Morris
Yeah, well, I know you've covered that extensively in your previous editions. But I was chatting to somebody at Google the other day and they were like, look at who's moving. It's not necessarily the most innovative researchers coming up with new ways to do this. It's the people that know how to manage these training runs to make sure they're successful. That's part of the reason why Google brought back the founder of a company called Character.AI, Noam Shazeer, who was able to marshal their Gemini training runs much more effectively, meaning they're faster to market and waste less money. But just to go back to that brute-force thing, more chips, more capacity, better model: that is definitely coming back again. You wouldn't have Meta trying to build a data center the size of Manhattan. You wouldn't have Anthropic talking about loosening its policies, shall we say, and taking money from the Middle East. And you wouldn't have OpenAI trying to break away from Microsoft and build its own gargantuan data center structure around the world if size didn't still matter. Remember, it's not just the training of these models; you've got to run them afterwards. You can't afford to have them get slow or fail and go down, because your customers won't be happy.
Alex
And meanwhile, think about what's happening with Stargate again, because if scale is the key, then Stargate is crucial. This is from Safra Catz, the CEO of Oracle: "Stargate isn't formed yet." That's in this Wall Street Journal story. Altman has used the Stargate name on projects that aren't being financed by the partnership between OpenAI and SoftBank. OpenAI refers to a data center in Abilene, Texas, and another it agreed in March to use in Denton, Texas, as part of Stargate, even though they are being done without SoftBank. And SoftBank, I'm pretty sure, owns the Stargate name. It's all very confusing. Meanwhile, OpenAI comes out with some news: it's going to expand its Oracle data center and develop an additional 4.5 gigawatts of Stargate data center capacity, it announced on Tuesday, the day after the Wall Street Journal story. And it looks like it's going to attempt to build 10 gigawatts of new compute through Stargate. I think that was already sort of baked in. But they're sort of signaling through the press that, you know what, there are no speed bumps. Who do you believe?
Stephen Morris
Well, I think there are definitely huge speed bumps that they're having to get over. That's not to say that they won't make it, but they are finding this more challenging and more difficult than they had expected. Part of the reason they're able to claim Stargate is off the ground is because they changed the definition of it. Previously it had to involve OpenAI; Oracle, the data center provider; SoftBank, the financier; and MGX, a huge new sovereign wealth-linked fund in the Middle East that was going to provide a lot of money. Now OpenAI has said any data center that we rent, because remember, they don't build them themselves, that's Stargate, which is obviously not what they said in January. But remember that Safra Catz, as chief executive of Oracle, runs a publicly listed company with shareholders. If they're asked a question by an analyst on a call, they can't lie, because they'll be sued. So if Stargate doesn't exist, she has to say Stargate is not formed yet, because otherwise she'll open herself personally, and also the company, up to all kinds of lawsuits. So you see the difference between being a private Silicon Valley tech company and a listed one. You often don't get the truth until someone is effectively put on the stand. When somebody is asked about material information in a public context, you can't just say, oh, well, we're doing all this other stuff and we're just going to change the definition. You have to say, no, it doesn't exist yet. Which is somewhat concerning. Part of this is, if you look at what they're doing at OpenAI, this is a company that exploded into the public imagination in 2022, and you often lose track of the ambition. They're trying to build a device with Jony Ive, one that they won't talk about, that's not a phone and not a headset or glasses. They're building their own data centers.
You know, they're building a variety of models, consumer apps, a browser. They want to get into shopping, they want to get into agentic commerce online. They have a huge lobbying apparatus in Washington and around the world to try and influence policy. And I don't even know how Sam Altman arranges his day to try and keep all of this in his head. But it does feel to me, and if you speak to people that are close to him and advise him, maybe they're doing too much too soon. I mean, I know tech is all like move fast and break stuff, or your established monopolistic rivals like Google and Microsoft and Meta will come in and sweep the board. But it does kind of feel like OpenAI is trying to keep a lot of balls in the air at the moment.
Alex
And Sam has a new baby, and Sam has found the time to go on comedy podcasts. A joke I made in our Discord today was, people are like, how's he going on Theo Von? It's like, well, AGI is doing his work so he can spend the time doing things he loves, like comedy podcasts. But a company that will call something Stargate when Stargate is not formed yet, I think they might call their model AGI, and maybe it doesn't meet the technical definition either. But you know, as these projects get built up, we're going to start to see this massive tax on the grid. And there's some great reporting in the FT this week about what AI is doing to energy costs. Here's the headline: AI Demand Drives Up Electricity Supply Cost in Largest US Market to Record High. The cost of providing electricity in America's largest power market will hit a record high due to soaring demand from artificial intelligence data centers and delays in building new power plants, raising energy prices for consumers. All right, this is going to get a bit wonky. I'll just read this paragraph and turn it over to you, Stephen. Grid operator PJM, which covers 13 states and Washington, D.C., said Tuesday it procured energy supplies for $329.17 per megawatt-day, a 22% increase compared with the previous year. The organization will pay power producers $16.1 billion to meet its energy needs from June 2026 to May 2027, a 10% increase compared with the previous year. It expects a 1 to 5% rise in customers' energy bills, depending on how utilities and states pass on costs. Wow. So this is now really starting to hit the size where it's having a real impact on energy bills. Double-digit price increases in energy. It seems to me the ability to produce and deliver it efficiently is going to be a major, major source of competitive advantage for whichever country figures it out.
Stephen Morris
Absolutely. That's why you're seeing companies like Microsoft and Google bring old nuclear power plants online and strike deals with these mini fusion reactor companies, one of which Sam Altman used to be a major owner of as well. There is just quite simply not enough power that exists in the United States, or around the world, or anywhere really apart from China, to drive these data center ambitions. It just doesn't exist. And this is all linked to top-level government policy. Donald Trump made a speech earlier this week about throwing the weight of the US federal government behind AI infrastructure. But just a few weeks before, he kind of gutted the American renewable energy industry, in particular solar and wind, by taking away various federal credits. This, to a large part, is how China is going to power the future of data centers and AI: with solar energy. In the US, we're actually seeing it take a bit of a step back. There's only so many gas turbines Elon Musk can put at his data center in Memphis to power the thing. What he really needs is a hydroelectric dam or a vast field of solar panels or offshore wind, which is why he's become so agitated with Donald Trump and the big beautiful bill. Not only did it almost destroy Tesla's business model overnight, it also gutted the ability of the US to compete with China on renewable energy, at least in terms of investment and deployment. So it doesn't surprise me at all that consumers are being hit in the pocket due to the increase in demand for data center power. And it's a real open question how the hell you're going to power all of this stuff, in particular in states that are really bidding to have these data centers built to create jobs and revenue, like Ohio, Pennsylvania, Arizona, and Texas.
Alex
Isn't it all just going to end up being nuclear power? I mean, that's where I see it going.
Stephen Morris
It takes a long time to build a nuclear power station and get it online safely. And while a lot of people might be pro nuclear power, I'm not sure how many people are pro living next to nuclear power. So you've got the NIMBY, not-in-my-backyard, factor coming in. I mean, you've had nuclear disasters in very advanced countries with good safety track records, like Japan, relatively recently, and I think memories of that are still strong. But I do think there's a big place for the next generation of nuclear power in all of this, because the coal and oil and gas just won't last forever.
Alex
It's so funny you mention NIMBY, because I'm reading this Reuters report about President Trump's plan to expedite AI development and lift some export controls and make sure that the data centers can work. The president's going full abundance. This is from the story: The plan calls for fast-tracking the construction of data centers by loosening environmental regulations and utilizing federal land to expedite the development of the projects, including any power supplies. I'm conflicted about this, I'll be honest. I mean, I've heard from AI lab leaders who are happy about the fact that they're going to have the energy to be able to produce this. On the other hand, I'm not excited about federal land being used to do this. Call me an idealist, but that's the people's land. And the idea that you're going to fast-track this, I mean, it's true, the abundance guys, the Republicans, they all have a point that it's too hard to build in the U.S. But if you, like, sort of disregard the Clean Water Act, maybe disregard is too strong of a word, but if you blow past it, I am concerned about what the consequences are going to be there. Yeah.
Stephen Morris
It's a return to the ethos of drill, baby, drill, isn't it? You know, just get these projects built, get these new sources online, build as much as possible. I guess the argument for people that believe in abundance, and that superintelligence is just around the corner, is that these technologies will help humanity find ways to capture carbon from the atmosphere, or build and use things more efficiently, and that as we become more accustomed to running these data centers, we can use them more efficiently over time. But certainly the mass appropriation of federal land and the disregarding of environmental rules, and I say this not as an American or a voter, doesn't seem like the best public policy to me.
Alex
Ilya Sutskever, the former chief scientist of OpenAI, now the guy who's running Safe Superintelligence, has had this vision that the world is going to be wallpapered with data centers as the scaling laws continue to show results. And I look at some of these pictures of these data centers and I'm just like, oh my goodness, this is the vision being lived out.
Stephen Morris
Yeah, well, on the ability to build a data center, Elon Musk has shown you can do it in months, not years. He's also shown a willingness to disregard local planning laws and environmental laws, which Memphis seems only too keen to help him do in order to make sure that he builds there and not anywhere else. But it's a race, and you look at countries and jurisdictions where it's harder to build, like, from my personal experience, the UK and the EU. If you take 10 years to build something that Elon Musk is building in three months, China is building in six, and Microsoft is building in one, there is going to be a little bit of a gap, especially if you want your sovereign AI and you want your citizens' data stored and processed locally. The concern in the other direction is that Europe will just fall so far behind these other countries that it won't have an AI industry that's meaningful in any sense, which may already be the case.
Alex
Yeah, unfortunately it does seem like it's trending that way. So I do want to talk about this bake-off between Google and OpenAI on the International Math Olympiad. It was kind of a funny story where both of these companies said that they had achieved gold medals in the International Math Olympiad competition. But OpenAI didn't officially participate, so it announced first, and then Google, which officially participated, announced second. But Google has the actual gold medal status because it was part of the competition. This is from the New York Times: This was the first time a machine, which solved five of six problems at the 2025 competition, reached this level of success. They're talking about the Google machine. The news is another sign that leading companies are continuing to improve their AI systems in areas like math, science and computer coding. And I just want to make sure I get this right: this is a large language model, a chatbot with a reasoning system, that is getting gold on the International Math Olympiad, not a purpose-built AI system designed for solving these math problems. So you could have a chatbot go into the International Math Olympiad and win gold, once and perhaps twice, for the first time. We talk often about how benchmarks are unreliable and all that stuff, but I think this is a pretty good indication that this stuff continues to make progress.
Stephen Morris
Yeah, I mean, it's a spectacular achievement. It's quite funny. I moved here just over 18 months ago, so I'm getting to know how Silicon Valley works, how the sausage is made, and who competes with whom, who hates whom. And OpenAI and Google have a long-running battle of trying to release products a few days ahead of each other to kind of show the other up. It happens frequently around Google's I/O event, which was a few months ago. And so Google had followed all the right procedures. DeepMind entered its model, it was in controlled circumstances, and obviously it was pretty confident it was going to win a gold medal and it was going to be good PR. And then you have OpenAI using the same kind of system, not officially entering, not subjecting itself to the same type of controls and scrutiny, and front-running Google's announcement by three days, and again, kind of soaking up all of the good media as a result of that. Now, this little behind-the-scenes competition doesn't take away from the achievement that either of these have made. And yeah, maybe they run GPT-5 through the Math Olympiad and it'll do even better.
Alex
Yeah, maybe they could go six of six this time and enter officially. But I just thought it was worth bringing up because it does show progress on all these things that we thought were impossible. It's crazy, right? This is a predict-the-next-word engine with a little bit of new techniques applied to allow it to reason and go step by step, and it's winning the gold in the Math Olympiad. It's totally crazy. And you think about the applications it could be put into or used for with this type of math skill. We're at this point where companies are all trying to figure out, like, is there an enterprise use case for AI? And if it's able to do abstract math, if it's able to get these problems right, you could start to see it having a real impact in industries like finance and maybe even the sciences.
Stephen Morris
Absolutely. Well, you mentioned Mustafa Suleyman's AI diagnostic tool. I mean, we both interviewed him about that a few weeks ago. What they did is they took the various models, because OpenAI's actually performed best, but they had a few others from Google and Anthropic too, and they almost created a team of doctors out of reasoning models that would talk to each other and question each other and make each other go back to the source material. And what they were able to do is take the most complicated, sort of House-level, the TV show House, weird medical ailments that affect one person every six years, and diagnose them at a rate like four times more successful, and much faster and much cheaper, with fewer tests, than human doctors. So we're starting to see very real-world use cases from this, across the sciences, across mathematics. And I used to be the banking editor for the FT in London, covering financial services. They are extremely interested in this technology and what it could do in terms of turbocharging their returns, and, if we look at the dystopian side a little bit more, how they could cut their headcount and increase their margins.
Alex
So I want to talk very briefly about a very interesting part of this Diagnostic Orchestrator, because it actually did improve performance. I'm sure you caught this. It improved performance by 4x over doctors who didn't have access to Google and, I guess, their colleagues. But when you looked at the fine print, this orchestration system helped, but it performed not 4x better than traditional LLMs, just a few percentage points better, and actually not that much better than reasoning systems. And it was like, on one hand, cool, okay, AI is able to do this much better than doctors. But on the other hand, if you wanted to take a contrarian look at what was going on, the very basic level of AI is itself three times better than your average doctor who's given some constraints, which is crazy.
Stephen Morris
Yeah, I felt it was a bit unfair to test doctors who weren't allowed to talk to their colleagues or refer to medical searches online. You're kind of saying, okay, you beat a human, but you stripped out some of the most impressive bits of modern medicine, i.e. the diagnostic process and pulling in their colleagues. But you know what? Microsoft are not going to put out a product that doesn't have impressive results, are they?
Alex
No, no, I don't think so. All right, let's breeze through a couple of these earnings reports. We do have the San Francisco bureau chief of the Financial Times here, so I feel like we should make use of your expertise as we're starting to see some big tech earnings come in. So I'll give you one question each on Alphabet and Tesla, and then I want to get to this memo that Satya Nadella wrote about layoffs at Microsoft, and then we'll get out of here. So first of all, Alphabet. It's amazing: despite the rise of generative AI, this company is continuing to crush. We will have, by the way, their head of search and information and knowledge coming on the show in a couple weeks, so folks, stay tuned for that. But here are the numbers from your story: Google's core search and advertising business grew 12%, beating expectations for a 9% rise. So even as generative AI continues to take share in search, you would imagine, everything's going up, and Google's figuring it out. And not only are they growing, they are growing double digits and beating expectations. What is happening there? It's very impressive.
Stephen Morris
Well, I think the reports of the demise of Google have been greatly exaggerated. They had a bit of a wobble in 2024, but they do seem to be very much back in the game. I mean, their growth is impressive: every quarter it's double-digit growth on tens of billions of revenue. For most other companies in this world, these numbers can only be dreamed of. But what they are showing is that the way they've integrated AI into search, whether it's the overviews, the bullet points at the top of your results, or the AI mode which you can click on and then it just behaves like GPT, Gemini or Claude, is actually boosting engagement. People are searching more, and the more people search, the more ads Google can actually show and the more money they can earn from that. So that's really what we're seeing. Google's argument has long been: yeah, we're going to lose some traffic to GPT, but the pie overall is going to grow. So maybe we don't have 91% of global search queries anymore, maybe we have 84%, but if the actual overall pie of queries increases, it's actually still more lucrative for them. Now, whilst the results are very impressive, there are huge clouds on the horizon for Google. We're waiting for the results of the search antitrust remedies, which could see Google have to do several important things. It could have to sell its Chrome browser, which I'm using right now, to a rival. We could see a hated rival like Perplexity or OpenAI buy that and have one of the best distribution methods for their AI technology in the world.
They're going to have to share more data with rivals. They're going to lose the right to be Apple's exclusive search engine provider on Safari across its devices. And they could see their business hamstrung in a variety of ways. So whilst the results are very good for Google, they come with a big asterisk: even the Trump antitrust administrators are still going after them and want to see them broken up. That's why we saw the shares jump not 10% but just a couple of percent, because everyone's waiting to see how this antitrust stuff lands on them.
Alex
I'll just make one snarky comment: it's perhaps easier to make money when you just ingest the entire web and the entire experience happens on your platform, versus having to send people to the pesky websites with the information.
Stephen Morris
Exactly.
Alex
Okay, let's talk about Tesla. Tesla had a very bad earnings report. We knew this was going to happen. But the news that I think the FT picked up on, and was to me the right thing to look at, was the outlook. And the outlook is bad, because this big beautiful bill has cut off EV credits; you would get a $7,500 credit to buy EVs in the US. Not only that, the biggest source of profit for Tesla has been these regulatory credits. Because they produce EVs, companies with mandates to produce EVs that don't meet those standards are able to buy credits from Tesla and effectively, you know, wink wink, meet the standards. And that is, if not going away, vastly diminished. This is your story: the revenue from the credits almost halved to $439 million in the quarter from the year before. Last year the company made $2.8 billion from these sales. And these, of course, are straight profit. So dark times ahead for Tesla, very dark times.
Stephen Morris
I mean, Tesla was worth $1.54 trillion on the 17th of December. It's now worth around $900 billion. So we're talking about more than half a trillion of market cap wiped out. It rose to a peak because people were optimistic Musk's relationship with Trump would allow Musk to help shape policy, maybe soften Trump's opposition to electric vehicles. I don't know about you, but we had a sweepstake in the office on how long it would take Trump and Musk to fall out. I was vastly overoptimistic at six months. A few of my colleagues said two, three, four. It ended up being four and a half. And now we are really seeing the shit hit the fan with regards to Trump, the Republicans, and renewable energy and electric vehicles. They do not like them. They listen to the other lobbies far more than them. And it's hard to actually imagine a worse set of policies for Tesla coming out of the big beautiful bill than what emerged. As you said, tax credits gone. Tesla would have actually made a loss in the first quarter if it had not been for selling regulatory credits. And what the Trump administration has done is they haven't got rid of these emission trading systems. They've just said, if you don't abide by these emission standards, the fine for non-compliance is zero. So if the fine is zero, why would you bother complying? So for Tesla, the market is essentially going to dry up whilst the system technically still stays in place. It's quite a cunning way of attacking it without actually having to go to Congress and change the rules. And Musk sounded, I don't know, he's bounced back from a lot before, but he sounded different on the Tesla earnings call. There was a lack of energy. He was very resigned. He said, we're going to have some rough quarters ahead. And even when he was talking about building millions of humanoid AI-powered robots or unleashing a fleet of billions of robotaxis around the world, his heart clearly wasn't in it.
And I know Tesla and Musk himself have a lot of fans. Every time I write a story which might suggest this isn't the greatest company in the world, I get like a torrent of online abuse from the Teslarati out there.
Alex
But as you deserve.
Stephen Morris
As I deserve, especially from Tesla boomer mom or whatever she's called. But we were right to point out that, just like with Google, there are these big clouds on the horizon. It's just that this seems more existential to Tesla, because remember, the way you get to armies of humanoid robots and robotaxis is by actually making money. And you make money by selling cars and selling credits. Musk has alienated a lot of his traditional client base in Europe and America and around the world by championing these right-wing causes and appearing with chainsaws at DOGE, hacking the federal government apart. So it'll be really interesting to see what they do. I mean, the board has a big decision to make about Musk and his leadership of Tesla. The company is him. You invest in it on optimism that he will position this company best for the future. But there's no denying that, through his politics and his fallout with Trump, he's become a bit of a liability to the company.
Alex
That is true, but I just couldn't see them going in any other direction. Especially because Elon has proven time and again that when his back is against the wall, he finds a way to figure it out. Although his back is really against the wall on this one.
Stephen Morris
We reported that, far from looking for a new CEO, the board is actually looking at giving him a new pay package. Because you remember, most of his pay got canceled by a Delaware court, leaving him with only 13% of the company, as opposed to about 20. And he said something quite interesting on the earnings call when he was asked by an analyst: do you feel comfortable developing AI and these robots with only 13% control? And he said, that's a major concern for me. I've got so little control, I could easily be ousted by activist shareholders after having built this army of humanoid robots. My control over Tesla should be enough to ensure that it goes in a good direction, but not so much control that I can't be thrown out if I go crazy, were his words. Now, obviously, going crazy has been a great strategy for Musk in the past, doing things successfully that others said were impossible. But that's a warning to the board. He's saying: give me more shares, give me more control, or maybe I leave and focus my whole attention on X and xAI and SpaceX, and then where does that leave Tesla?
Alex
Yeah, it would be in a terrible place, I would imagine, because the valuation is not based off of the car business; it's based off of everything that might come next. All right, let's end with this story, which I found interesting. Satya Nadella felt it important to write Microsoft employees about the layoffs, the morale, the culture, and the fact that this company is worth almost $4 trillion, $3.84 trillion, and is laying people off, which is, I'll just say it, crazy. Here it is from GeekWire: In a company-wide memo, Nadella acknowledged what he called the uncertainty and seeming incongruence of Microsoft's situation. Even with its recent job cuts, he wrote, the company is thriving by every objective measure, with strong performance, rapid capital investments, and relatively unchanged overall headcount due to ongoing hiring. Nadella pointed out that some of the talent and expertise in our industry and at Microsoft is being recognized and rewarded at levels never seen before. And yet, at the same time, we've undergone layoffs. And here is, to me, the most interesting paragraph: This is the enigma of success in an industry that has no franchise value. Progress isn't linear. It's dynamic, sometimes dissonant, and always demanding. But it's also a new opportunity for us to shape, lead through, and have greater impact than ever before. And yet, as I read Satya's words, I honestly cannot tell you why he felt the need to lay off 10,000-plus people in recent months. What is happening here?
Stephen Morris
Nadella is, you know, a very savvy man. What he did at Microsoft was essentially take it out of its various failed consumer enterprises and really refocus it on enterprise, i.e. corporations and businesses and data centers, Azure, and that's worked out extremely well for him. But he has cut his way to success out of the consumer business, giving him the capacity to invest in the other side. What we're seeing now is that they're going to be spending a lot more money. They, like all the other tech companies, are spending tens of billions, if not soon more than a hundred billion, on infrastructure every year. They also have shareholders to appease, who want dividends, who want to see the share price continue to go up, and you've got to make the sums add up, so you have to take some of it out. So he's gambling that there are some non-AI-native people in the company who can be replaced either by AI systems themselves, or by bringing in cheaper, younger people that are better able to infuse this technology through the company and shake it out of its old ways. I do think headcount at Microsoft has actually stayed roughly level. So whilst you've had a lot of these layoffs, they've clearly been hiring a lot of people as well. I think Mustafa Suleyman now has 6 or 7,000 people reporting to him, and just this week I reported that they poached another 23 from DeepMind. These people, as you well know, do not come cheap, right? At the top end they're getting $100, $200, $300 million. At the bottom end they're still incredibly well compensated, and it has to come from somewhere. But by sending a memo and saying it's weighing heavily, that tells you another story about morale inside the company.
You know, 15,000 people is a lot, especially in a smallish place like Seattle. To send something like this to try and put a more human face on it tells you that people at Microsoft are not happy.
Alex
Absolutely. And I think a lot of people are pointing to a couple things. First, it seems like the games division of Microsoft has taken a massive hit, probably disproportionate. Though I guess if you're going to invest somewhere, I would invest in AI over games. Second, there's this thing that keeps popping up in my mentions: that it's not really Microsoft cutting staff, it's Microsoft reallocating its budgets and offshoring. What do you think about the offshoring idea, that they're basically just hiring the same people, just in countries with lower-cost labor?
Stephen Morris
Absolutely, that's happening. I read a stat the other day, I may be wrong, that there have been 70 or 75,000 layoffs in US tech recently, and those jobs haven't disappeared. Some of them have been replaced by AI, but a lot of them have moved to lower-cost jurisdictions abroad. There's also been a very big trend at big tech companies to prefer contractors over full-time staff. Contractors don't get free massages; they don't necessarily get the same levels of healthcare. And whilst, compared to Europe where I'm from, there are basically no employee protections out here in the States, there are more if you're a full-time employee, like you're owed redundancy. Whereas if you're a contractor working for Tech Mahindra or someone like that, based remotely or out in India or Malaysia, you are far, far less of a burden on the company if ever there's a slowdown and they need to cut costs. And you're also far more flexible in terms of moving onto different projects. So I feel bad for all the people that have lost their jobs at Microsoft, but we should emphasize it's not just them. This is part of a broader trend across the technology and wider industries, and I think we should be bracing for even more of this as we start to see people trust AI to do jobs that humans were relied on for, for long periods of the past.
Alex
Yeah, I think you're right. Folks, be careful: those massages, they come with a hidden cost that you may not be fully appreciating at the time. Though, enjoy them, enjoy the massage. Don't feel bad as it's happening, because gosh, that's a very nice perk. So Stephen Morris, thank you for joining us. Where can people find your work and that of your team?
Stephen Morris
You can find us, well, I think the FT is one of the most expensive newspaper subscriptions in the world, but we're definitely worth it. Find us on our website, mainly on our app, and occasionally posting on social media as well.
Alex
All right, well, thank you, Stephen, for joining. Thank you, everybody, for listening and watching. We will be back on Wednesday with what I think is going to be the best interview of the year here on Big Technology Podcast. So we hope you stay tuned, and we'll see you next time on Big Technology Podcast.
Big Technology Podcast: Countdown to GPT-5, OpenAI’s Stargate Sputters, AI Math Wars
Release Date: July 25, 2025
Host: Alex Kantrowitz
Guest: Stephen Morris, San Francisco Bureau Chief at the Financial Times
In this episode of the Big Technology Podcast, host Alex Kantrowitz sits down with Stephen Morris from the Financial Times to delve deep into the latest developments in the AI and tech industries. The conversation covers the anticipated release of OpenAI’s GPT-5, the struggles of the ambitious Stargate data center project, the competitive landscape of AI models, significant earnings reports from major tech giants, and the broader implications of AI advancements on energy consumption and employment.
The episode kicks off with a discussion about the highly anticipated GPT-5 model from OpenAI. According to reports from The Verge, OpenAI is preparing for a potential August release after initial delays.
Stephen Morris [01:51]:
"There is a lot going on with this company at the moment... the hype is because it has one of the best, if not the best, models, and that hasn't necessarily been true for a while."
Sam Altman, CEO of OpenAI, has hinted at groundbreaking capabilities for GPT-5, sparking significant excitement and skepticism. Early user feedback suggests improved performance across various domains, including hard sciences, software engineering, and creative writing.
Alex Kantrowitz [05:49]:
"GPT5 is almost here and we're hearing good things... improved performance in a number of domains, including the hard sciences, completing tasks for users on their browsers and creative writing."
Morris emphasizes that GPT-5 could mark a significant leap forward for OpenAI, especially in coding capabilities, which have been a competitive edge for rivals like Anthropic’s Claude.
Stephen Morris [07:50]:
"If GPT-5 is the slam dunk that Sam Altman is telling everyone it's going to be, that will put those doubters and those questions to rest."
The conversation shifts to the Stargate project, a $500 billion initiative aimed at building massive data centers to support AI advancements. Initially announced with high hopes, the project has faced significant delays.
Alex Kantrowitz [24:07]:
"The 500 billion dollar effort unveiled at the White House to supercharge the US's artificial intelligence ambitions has struggled to get off the ground..."
Stephen Morris [26:15]:
"But part of the reason they're able to claim Stargate is off the ground is because they changed the definition of it... they're trying to keep a lot of balls in the air at the moment."
Stargate’s challenges are attributed to disagreements between OpenAI and partners like Oracle and SoftBank, along with the complexities of scaling such an enormous infrastructure project amidst fierce competition from tech giants like Google and Microsoft.
AI advancements necessitate substantial energy resources, leading to rising electricity costs and environmental concerns. The episode highlights how AI-driven data centers are driving up energy demand, particularly in the US.
Alex Kantrowitz [34:35]:
"The cost of providing electricity in America's largest power market will hit a record high due to soaring demand from artificial intelligence data centers..."
Stephen Morris [36:29]:
"There's just not enough power that exists in the United States or around the world to drive these data center ambitions... it's linked to top-level government policy."
Discussions also touch upon the potential reliance on nuclear power to meet energy demands, although concerns about safety and public acceptance remain.
A notable highlight is the competition between Google and OpenAI in the International Math Olympiad. Both companies claim their AI models achieved gold medals, showcasing the rapid progress of AI in complex problem-solving.
Alex Kantrowitz [40:31]:
"OpenAI and Google have a long-running battle trying to release products a few days ahead of each other to show the other up."
Stephen Morris [43:16]:
"This shows that AI systems are not just regurgitating information but genuinely solving complex mathematical problems, signaling significant advancements."
The episode reviews recent earnings reports from Alphabet (Google) and Tesla, highlighting contrasting fortunes.
Despite fears that generative AI might erode its search dominance, Google reported a 12% growth in its core search and advertising business, surpassing expectations.
Stephen Morris [48:00]:
"Google's way of integrating AI into search, whether it's the overviews or AI modes, is actually boosting engagement and ad revenue."
However, Google faces ongoing antitrust challenges that could impact its business operations and market dominance.
In stark contrast, Tesla reported a disappointing earnings outlook due to the discontinuation of EV tax credits, which significantly impacted their regulatory credits revenue.
Stephen Morris [51:43]:
"Tesla was worth $1.54 trillion, now it's around $900 billion... the relationship with Trump and the resulting policy changes have been detrimental."
Elon Musk's wavering attention and the company's reliance on regulatory credits pose serious challenges for Tesla's future profitability and operational stability.
The discussion moves to Microsoft’s recent layoffs and the broader implications for the tech workforce.
Alex Kantrowitz [56:18]:
"Satya Nadella wrote a memo addressing layoffs and company morale, highlighting the paradox of strong performance amidst significant job cuts."
Stephen Morris [58:01]:
"Microsoft is reallocating budgets and offshoring jobs to reduce costs, reflecting a broader trend in the tech industry towards automation and cost optimization."
Morris notes that while headcount numbers might remain stable, the quality and localization of jobs are shifting, raising concerns about employee morale and job security.
The episode wraps up with reflections on the transformative impact of AI on various industries, the competitive dynamics among tech giants, and the societal challenges posed by rapid technological advancements. Kantrowitz and Morris emphasize the need for strategic investments, sustainable energy solutions, and thoughtful leadership to navigate the evolving tech landscape.
Stephen Morris [01:51]:
"What Sam Altman has said about the models is that you shouldn't have to pick which one you use yourself for a variety of different tasks."
Alex Kantrowitz [05:49]:
"GPT5 showed improved performance in a number of domains, including the hard sciences..."
Stephen Morris [07:50]:
"If the coding aspect lands right with this, I think that could be quite transformational for its business model."
Alex Kantrowitz [34:35]:
"Consumers are being hit in the pocket due to the increase in demand for data center power."
Stephen Morris [48:00]:
"Google's integration of AI into search is boosting engagement, meaning more ad revenue."
Stephen Morris [51:43]:
"Tesla's reliance on regulatory credits has made them vulnerable to policy changes."
Stephen Morris [58:01]:
"Microsoft is reallocating budgets and offshoring jobs, reflecting a broader trend toward automation."
This episode provides a comprehensive overview of the current state and future prospects of AI and big tech. From the promising yet uncertain debut of GPT-5 to the monumental challenges facing OpenAI's infrastructure projects, the competitive strides of AI in mathematics, and the contrasting fortunes of tech giants like Google and Tesla, listeners gain valuable insights into the rapidly evolving tech ecosystem. Stephen Morris's expertise from the Financial Times adds depth to the analysis, making this episode an essential listen for anyone keen on understanding the forces shaping the future of technology.