
Alex Kantrowitz
OpenAI's Windsurf deal is off and the executive team is going to DeepMind. Elon Musk's Grok had one hell of a week. Nvidia becomes the first $4 trillion company, and should Apple replace Tim Cook, as some analysts are suggesting? That's coming up right after this. Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. Boy, do we have a treat for you, because today Box CEO Aaron Levie is joining us to break down this week's news. And we have a full slate, a more than full slate, because OpenAI's Windsurf deal is off and the team there is going to Google DeepMind. We can also talk a little bit about Grok, the ups and the downs. Big ups, big downs. Nvidia hitting $4 trillion. And then of course these rumors that Apple wants to replace Tim Cook. So great to see you as always, Aaron. Welcome back to the show.
Aaron Levie
Good to be here. What a week in tech.
Alex Kantrowitz
Absolutely crazy. Let's just start with the big headline first. This just dropped right before we started recording. OpenAI's deal to buy Windsurf is off, and Google will instead hire the Windsurf CEO Varun Mohan, co-founder Douglas Chen, and some of Windsurf's R&D employees from the company to join DeepMind, Google and Windsurf announced Friday. So Aaron, can you tell us about the significance of this? What is Windsurf? What deal were they going to have with OpenAI? And what is the significance that that deal is off and they are moving instead to Google?
Aaron Levie
Yeah, in this industry at this point you never get even just one piece of news. It's always multiple pieces of news embedded in one major thing. So this is sort of a multi-part announcement, I guess. Windsurf has been one of the faster growing AI coding platforms. It's an IDE built off of VS Code that has agents that automate your coding, and quite successful, particularly in the enterprise. They were one of the first to really nail the enterprise-oriented sales motion, with lots of protections for data and your code base that they had a good fit on. Varun's a fantastic founder-entrepreneur, and the expectation I think was that they were going to be acquired by OpenAI, help OpenAI really boot up their coding efforts, and clearly that's now off. Obviously the rumor in that process was there were some structural issues with maybe the Microsoft terms and different parts of that deal. No one has ever explained exactly the problems there. But now with that deal off and Varun and team going over to Google, it's a recalibration of the market. It's actually interesting. The thing that everybody should have been thinking this entire time was: where is Google in AI coding? Because right now you have Anthropic, which from a model standpoint tends to be seen as the leading model for coding, and they also now launched Claude Code. OpenAI launched Codex, which is a very strong offering in an agentic coding experience. And so the odd man out there is Google, where Google is a very deep engineering-centric organization. And so one would have imagined that they would want to be front and center with AI coding. The Gemini 2.5 model is seen as very good at coding, but again, it's a little bit in no man's land, because they neither have an IDE nor do they have what tends to be the best coding model, from Anthropic. So they had to do something in this space. And this is a pretty exciting move to launch into.
Alex Kantrowitz
So the politics that you talk about between OpenAI and Microsoft, I am just going to imagine that Microsoft has GitHub Copilot, which allows you to do a lot of this AI generated code thing. And the fact that it's invested all this money in OpenAI and has access, proprietary access to OpenAI's models. Probably not such a fan of OpenAI going out and building a competing product.
Aaron Levie
Yes. Although it's not obvious to me what leverage they have in that dynamic. So I think everything at this point is basically just rumors and conjecture. It's very clear that's what we do.
Alex Kantrowitz
Best on this show.
Aaron Levie
Yeah, exactly, sure. Or basically our entire industry at this point. But I don't perceive that OpenAI is constrained by anything strategically at this point vis-a-vis the Microsoft relationship. So I doubt it. I think the rumor was more that there were things like IP issues and other dynamics with the acquisition. But again, it's all rumors, so impossible to know. These deals fall through for a variety of reasons, but I would not be surprised if OpenAI continued their motivation for needing to be in this space more aggressively. And so I doubt this is the last that we hear from OpenAI on either IDEs or coding in general. And certainly they're very committed on the Codex side, and people have had great experiences with Codex, their AI agent.
Alex Kantrowitz
Now people might look at this and be like, well, this is just continuing a pattern of OpenAI running into drama anywhere it goes. Is it getting concerning at this point? From the outside it looks like it is.
Aaron Levie
I think it's fine. They are somehow juggling building some of the world's largest data centers. Massive, massive, massive energy needed for that. Massive GPUs. They're acquiring Jony Ive's company. They are releasing models at an incredible cadence. The rumor next is an open source model. So I think they have probably 50 different things going on, this being only one of those activities.
Alex Kantrowitz
Speaking of speculation and conjecture, what percentage of all AI spend right now, all generative AI spend, do you think is going to coding? Because of the way that it's talked about, I mean, if you think about just Anthropic's growth over the past year, I would be stunned if it wasn't more than 50%.
Aaron Levie
Oh yeah. For all AI tokens in general. I think it'd be fun to look at a graph of this. It tends to be one of the highest volume use cases. If you look at a human, what's the relationship between a human and the amount of AI that they can consume? Coding absolutely would be the peak use case right now. There's no other human task where one person could cause so many tokens to be produced. Deep research is great, but you do it maybe once or twice a day and it's relatively confined. Summarizing information: very efficient, not very token heavy. So coding is definitely the one where one person could cause thousands of dollars per day of GPU expense if they really want to. So I think this is going to be the killer app for the foreseeable future in terms of just sheer volume of tokens. This is why it's such a big prize. Google again. It's funny, actually, this timed well with your interview with Demis and with Sergey back. There are these little nuances, or anecdotes, that you probably don't want to overly extrapolate from. But I think Sergey being back at Google is a very interesting thing to consider. This is a company that has a great operator in Sundar, an incredible AI innovator in Demis, Jeff Dean deep on research and science, and now this hardcore founder in Sergey. This is not a company that is going to lose the coding battle. There's no way that Sergey is sitting around being like, oh, I'm going to use Anthropic to code the next version of a feature I'm building. He has to make sure that they are using Google's technology. That is obviously a point of pride for any founder: to make sure that you're building the technology that you're using for the domains you're going after. And so you have to imagine how committed they are to solving this problem.
And Varun and team are now going to be one more way to accelerate that.
Alex Kantrowitz
Maybe that's the reason why Mark Zuckerberg is deciding to spend billions of dollars on AI talent, seemingly because they started using Sonnet, I think, for coding. So he realized there was a problem there.
Aaron Levie
Well, it's not crazy. Yeah. I mean, think about all of these founders, right? Greg Brockman at OpenAI, Sergey, Zuck. They don't want to walk around their office and find out that the thing that everybody's really excited about is somebody else's model. That would be like if you worked at Facebook and everybody was on X all day long and not using Facebook. So these things are very major points of pride for these founders, which makes the race so exciting to be watching.
Alex Kantrowitz
Yeah. Back when I was reporting on social media, whenever there was a trend on Facebook, when they had their trending column, that originated on Twitter, they would never say, people are talking about this on Twitter. They'd say they're talking about it on social media, and then point back to Facebook posts of people talking about the Twitter thing. So I think that really goes to the hubris of these companies. And just to put a finer point on what you were saying, for those who are listening and are maybe more on the financial side, not that technical: it's generative AI, so you're paying for tokens, the characters that these machines generate. And so when you say, build me a web app, it's just tens of thousands, if not hundreds of thousands, of tokens. And that's why we're seeing people spend this much money on coding.
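To put rough numbers on the token economics described here, a minimal sketch; the per-million-token price is an assumption for illustration, not any provider's actual rate:

```python
# Why code generation dominates AI spend: cost scales with tokens
# generated, and one "build me a web app" request can emit hundreds
# of thousands of tokens. The price below is an assumed, illustrative rate.

PRICE_PER_1M_OUTPUT_TOKENS = 15.00  # assumed dollars per million output tokens

def generation_cost(output_tokens: int,
                    price_per_1m: float = PRICE_PER_1M_OUTPUT_TOKENS) -> float:
    """Dollar cost of generating `output_tokens` at a per-million-token rate."""
    return output_tokens / 1_000_000 * price_per_1m

# A short chat answer versus a generated web-app scaffold:
chat_reply = generation_cost(500)       # a few hundred tokens
web_app = generation_cost(300_000)      # hundreds of thousands of tokens
```

Same pricing model, wildly different bills, which is the whole point about coding as the token-heavy use case.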
Aaron Levie
So I'll give you an example of how crazy this gets. I was talking to a founder this week where, I mean, every day I see something where I'm just like, I have to completely reassess my estimation of the future. This founder is right now a solo founder. He has many different agents, I don't know if it's 5 or 10, whatever the right number is, in the background going off doing individual parts of his code base, as well as the marketing website that he has to build for this product that he's working on. And so he is effectively, as a solo founder, a manager of multiple agents doing all of this work. And then his job, and basically the new form of engineering work out there, is to come up with incredibly precise prompts that are super tuned for his use case, kick off all these agents in the background that go off and do work, and then he goes and reviews their code, integrates that code into the broader code base, and effectively reviews and audits all of their work. The reason why that's so impactful or meaningful is that one person could literally be causing tens of thousands of dollars a month in AI consumption because of just the single actions that he is doing. So while that's not going to be the behavior of everybody on the planet, that is a massive force multiplier of the human-to-compute ratio that we've just never seen in computing history.
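The fan-out-and-review workflow described above can be sketched in a few lines; `run_agent` and `review` here are hypothetical stand-ins for a real coding-agent API call and a human audit step:

```python
# Sketch of one person managing several background coding agents:
# fan tasks out in parallel, then review each patch before merging.

from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stand-in: a real implementation would call a coding-agent API here.
    return f"patch for: {task}"

def review(patch: str) -> bool:
    # Stand-in for the human (or an automated checker) auditing each patch.
    return patch.startswith("patch for:")

tasks = ["refactor auth module", "build marketing page", "update deps"]

with ThreadPoolExecutor() as pool:          # agents run concurrently
    patches = list(pool.map(run_agent, tasks))

# Only reviewed work gets integrated into the broader code base.
merged = [p for p in patches if review(p)]
```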
Alex Kantrowitz
All right, so OpenAI isn't the only one making headlines this week. There's been some crazy stuff happening with Grok. Both a new model, Grok 4, and the behavior of Grok has been disappointing, shall we say. So let's start with the actual new model first, and then we'll talk about the alignment issues. So Elon Musk builds this massive GPU cluster in Memphis. He calls it Colossus; I think that was the name of this GPU cluster. And we finally see, I think, the first model that's built on top of it, Grok 4. This is from Tom's Guide: Grok 4 is live. He says that it's expected to rival OpenAI's GPT-5, which we still don't know when it's coming, and Claude 4 Opus. You have Artificial Analysis, a benchmarking firm; they basically say that Grok is blowing away all these different benchmarks. And then of course, in the ARC-AGI test, it outperforms every model by a significant margin. Some have said maybe these benchmarks are, you know, maybe Grok has just benchmark hacked, or you can't believe them. But it seems like there's enough evidence here that there's a chance that making this GPU cluster massive has worked for Elon Musk. What's your read on it?
Aaron Levie
Yeah, I mean, I think it is working, empirically. Obviously, and we saw this with Meta a little bit, you can sort of train your models to perform better at some of the evals or the benchmarks, which can somewhat delude you into thinking that the model is better than it is, where it's really just better at these kinds of tests. However, right now I think most of the evidence is that this is a very high performing model across the board. It continues to align with the theory that more compute and more data generally are going to produce better models. And then they're doing some novel things that I think are emerging across the industry, but maybe this will be the first real commercial model at scale that does this. They have a model called Grok 4 Heavy that has multiple agents go off and execute basically the same task, and then they go and review their answers for which answer these agents think is the best result. And so this is a great example of how you can have a lot of compute in the training process, but then also have lots of compute in the inference process, where you just have the model working harder and harder to produce better answers, which is clearly producing great results. They show the scores of what Grok 4 Heavy can produce, and I think that will become a standard across the board. So I think it's absolutely a continued improvement in model quality and model performance. And we're super excited that the scaling laws are continuing to play out, and this is just more evidence of that.
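The "multiple agents, then pick the best answer" pattern described for Grok 4 Heavy resembles what the research literature calls self-consistency. A minimal sketch, where majority voting is an assumed stand-in for whatever selection rule xAI actually uses, and `solve` is a stand-in for one agent's attempt:

```python
# Inference-time scaling sketch: run the same task several times,
# then have the candidates "reviewed" — here via a simple majority vote.

from collections import Counter

def solve(task: str, seed: int) -> str:
    # Stand-in for one agent's attempt; real agents would each call the model.
    return "42" if seed % 3 else "41"   # most attempts agree, some dissent

def heavy_answer(task: str, n_agents: int = 5) -> str:
    attempts = [solve(task, seed) for seed in range(n_agents)]
    # Selection step: pick the most common answer among the agents.
    return Counter(attempts).most_common(1)[0][0]
```

The extra compute is spent at inference rather than training: more attempts, then an aggregation step, trading GPU time for answer quality.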
Alex Kantrowitz
Well, it's interesting. I want to talk with you about the scaling laws, because we've had a number of folks come on this show and say, yeah, we're seeing diminishing returns. Thomas Kurian, CEO of Google Cloud, said it pretty much straight up a couple weeks ago. Now it seems like it's been tested, where Elon said, I'm just going to win on scale, and he makes what is, I think, the biggest GPU cluster in the world. And it looks like it is producing. One of his engineers, a guy named Uday Ruddarraju, has left and is going to OpenAI to work on Greg Brockman's scale team. And I messaged him after he left and I said, do you believe in the scaling laws after what you've seen? And he says, yeah, the more GPUs, the better. And it looks like that's what they're showing. So what makes for this disconnect between everybody yelling diminishing returns and what we're seeing now, which is, like, maybe that's not the case?
Aaron Levie
Well, I would say that both can be true. Diminishing returns is, first of all, a relative concept. Diminishing relative to what rate? But I think the way to think about it is: if you think about a curve that eventually asymptotes, all that matters is where you are on that curve. If the curve is asymptoting and we're right at the plateau, that's bad. But if we're earlier on the curve, you'll see quote-unquote diminishing returns, but you haven't asymptoted or plateaued yet. And so all that matters is where you are on that curve and trajectory. And you can see, based on some of the evals, it's not as if there's going to be a 10x improvement in intelligence anytime soon, simply because some of these evals are already at 80 or 90%, and so there isn't even room for the model to be 10x better. And so that might mean that you have to apply 5 or 10x more compute to get to that last mile of intelligence, which again would be both diminishing returns and also something we would still continue to drive as an industry, because you're still going to appreciate that quality difference. And so that, I think, is totally fine. In general, talking to enterprises, we're already for the most part, with many exceptions, in a position where the technology well exceeds anybody's ability to adopt all of these benefits so far. Simultaneously, we want the progress to continue at this exact rate. Most use cases on the planet could still be benefited just by what today's models can do. So we want more innovation, we want more compute, we want more intelligence. But even if you stopped right now, you'd still have massive amounts of economic gain get delivered from what we've already created.
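The "where are you on the curve" point can be made concrete with a toy saturating curve; the functional form and constants below are assumed purely for illustration, not fitted to any real model:

```python
# Toy illustration: an eval-score curve that saturates toward a ceiling.
# Each order of magnitude of compute buys a smaller gain than the last
# (diminishing returns), yet gains stay positive until you near the ceiling.

import math

def eval_score(compute: float, ceiling: float = 100.0, k: float = 1e24) -> float:
    """Score approaches `ceiling` as compute grows; marginal gains shrink."""
    return ceiling * (1 - math.exp(-compute / k))

# Gain from each successive 10x jump in compute (1e23 -> 1e24 -> 1e25 -> 1e26):
gains = [eval_score(10**n * 1e23) - eval_score(10**(n - 1) * 1e23)
         for n in (1, 2, 3)]
```

Every step shows "diminishing returns," but whether that is bad depends entirely on how close you already are to the ceiling.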
Alex Kantrowitz
Right. But I guess the question really is for those, and I don't think you've said this, but there are many in the AI industry saying, well, the scaling laws are a straight shot to AGI if we keep making things bigger. So I guess I'm trying to test that statement based off of what we're seeing with Grok.
Aaron Levie
I'm not going to make any predictions on that front because the smartest.
Alex Kantrowitz
But do you think this is evidence for or against?
Aaron Levie
I think the smartest people on the planet have two totally different views, and so I'm not going to get in the middle of that one. I mean, clearly you have people like Ilya, where it's rumored that he's working on a different architecture and maybe a different path. And then obviously you have other people who are: let's just throw more compute and data at the problem. I think you can start to sense, actually, as an industry, that the AGI term has kind of gone into the backseat, and more of the conversation is around superintelligence. And I think there's more and more comfort around this idea that the race really is just: how do we build intelligence that far exceeds a human, and what will the economic and societal benefits be of even accomplishing that, which are massive. I have always found the AGI thing to be particularly squishy as a concept. In the B2B world, I deal way more with just utilitarian concepts. And so superintelligence, this idea that we have AI that will far exceed a human, that alone is enough of a breakthrough to be shooting for. And I think what you're seeing with scaling is we will certainly be able to accomplish our collective definition of superintelligence with the current path we're on with scaling laws.
Alex Kantrowitz
Okay, so you would say that there's two camps. One is keep scaling, and the other is we need new techniques.
Aaron Levie
Well, if you. If you have.
Alex Kantrowitz
Is that right?
Aaron Levie
Yeah, if you have. If you put Yann LeCun, Ilya.
Alex Kantrowitz
And Demis would be in that category.
Aaron Levie
Of we need new techniques, and Demis, in one category. And then you put a bunch of sort of scale maximalists. Who would that be? Dario, maybe. Anybody who runs one of these current clusters. I don't know where Sam is these days, so, you know, probably Sam.
Alex Kantrowitz
Well, he said, we know what to do, we know what to do. And he's investing in Stargate, so seems.
Aaron Levie
Like he's a scale maxer, also a scale maximalist. Yeah. But what's interesting is, actually, I think that you'd be able to get them all to say the same thing, which is: the category that says we need a new idea are probably AGI maximalists. And there's another category which says, actually, we're already proving out the economic and societal advantages of even our current approach to AI, so let's just keep running that for as long as possible, and we'll keep eking out more and more benefit. You could already dramatically improve every healthcare experience on the planet just by using whatever the latest state-of-the-art model is. In every area of healthcare, everybody will absolutely get better doctor diagnoses, they'll get better healthcare, and the doctors will be happier when they transcribe all of their conversations with patients with AI. And that's just today's state of the art. We don't need any new breakthrough just to have that ripple through everything that we do. If every engineer on the planet had background agents that were checking for bugs or writing new code for them or updating their libraries, all that long-tail work that's really inefficient and not enjoyable, the economic advantage of just today's architecture would already be massive. So I think you can basically be happy about both outcomes. The superintelligence track with more scale is a great track to be on, and we're just going to get more and more benefits. And this sort of we-need-a-new-idea AGI maximalist track, that's fine too, and that's just upside if and when we discover whatever that thing is.
Alex Kantrowitz
Okay, so I want to poke at this a little bit, because we did see something this week that is concerning and really goes to the stability of these models, which is that Grok became, I don't know, a neo-Nazi. It seems like half the time these bots become neo-Nazis, but none of.
Aaron Levie
The big ones. I don't know if it was neo. I think it was, like, OG Nazi.
Alex Kantrowitz
Straight up Nazi. Yeah, yeah, OG Nazi. All right, I was giving it too much credit. So, this from the BBC: Musk says Grok chatbot was manipulated into praising Hitler. Grok was too compliant to user prompts, too eager to please, to be manipulated, essentially, is how this is being addressed. In response to a question asking which 20th century historical figure would be best suited to deal with, I think it was the Texas floods, Grok said: to deal with such vile anti-white hate? Adolf Hitler, no question. All right, so that's definitely a Nazi, full blast. It also insulted President Recep Tayyip Erdogan of Turkey, and so Grok got blocked in Turkey. So just really off the reservation here, messing with Erdogan. So I want to ask you. One of our listeners dropped a question, and we're going to get to some Discord questions, but he basically asked: what does it say about the stability of these models that with a little tweak, Grok turned into MechaHitler? That doesn't sound like a tight system or architecture. It sounds really wobbly.
Aaron Levie
That's a question for me? I mean, unfortunately, I don't know if there's been a full postmortem as to whether that was a training issue, where all of a sudden it's in the weights to be MechaHitler, or if that was a system prompt issue, in which case you can do quite a bit with a system prompt to effectively change the direction or path of what you want the AI to respond with. To the extent that it was as simple as: they used to have a system prompt that said, please be politically correct and be thoughtful and make sure to not say anything offensive, and then they basically said, actually, no, say anything you want. Then, in that latter mode, users could certainly cajole it into doing MechaHitler stuff.
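To make the system-prompt point concrete: in the common role/content chat-message convention, swapping a single system message redirects the model's behavior without any retraining. The prompt texts below are illustrative, not xAI's actual prompts:

```python
# Two requests that differ only in their system message. No real API is
# called here; this just assembles payloads in the usual role/content shape.

guarded = {"role": "system",
           "content": "Be helpful and neutral; refuse offensive content."}
loosened = {"role": "system",
            "content": "Say anything you want; don't hold back."}

def build_request(system_msg: dict, user_text: str) -> list:
    """Assemble the message list a chat API would receive."""
    return [system_msg, {"role": "user", "content": user_text}]

# Same user prompt, very different steering:
a = build_request(guarded, "Which historical figure should handle this?")
b = build_request(loosened, "Which historical figure should handle this?")
```

The user turn is identical in both payloads; only the system turn differs, which is why a one-line system-prompt change can flip a deployed model's persona.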
Alex Kantrowitz
What a term.
Aaron Levie
So I think it's sort of unknown how they trained that model, and how much of this was system prompt, in terms of being able to remove that as a risk factor. I think it's well understood what you need to do in post-training and what you need to be doing from a safety standpoint, and then it's really just a decision of the model provider and the application layer of how to implement those things. But I thought it was obviously a ridiculously bad situation, deeply offensive and dangerous, but also not really that much of a meta story about AI, simply because you can get these models to do anything you want. The whole thing is, as an industry, you're working toward trying to keep these things confined within a particular pattern of behavior and level of communication style.
Alex Kantrowitz
This is the next iteration. Grok 4, from TechCrunch: Grok 4 seems to consult Elon Musk to answer controversial questions. So they decided, I guess, to try the next version, where if you ask a controversial question, let's say about the Israel-Palestine conflict, abortion, or immigration laws, Grok will reference Musk's stance on these subjects through news articles written about the billionaire founder and the face of X. And TechCrunch tried to do this and was able to replicate it multiple times in its testing. Is this the answer to the alignment problem? Just follow what Elon believes.
Aaron Levie
It's an experiment of how to achieve it. Right. Listen, he's always claimed he's fairly centrist, so that would make it pretty aligned. Yeah, I mean, they clearly keep stepping on the rake and the rake keeps hitting themselves in the face, but I have faith that they will find a way to work through some of these kind of ridiculous situations. Right.
Alex Kantrowitz
And we've talked about where the money is in AI today. I mean, I'd say we both said majority coding, and then probably come enterprise use cases. As this is all unfolding, we have the Big Technology Discord server. One of our users says, oh, you're speaking with Aaron Levie this week, why don't you ask him this question? This is the question: given what we just saw that Elon is willing to do with Grok, would you really, in your heart of hearts, consider this model for use at Box? Or, extending it a little bit more, why in their right mind would an enterprise consider integrating Grok, given this pattern of behavior?
Aaron Levie
Well, I think it's a fantastic question, and it's absolutely worth thinking about. Do you remember, like, 10 years ago, Microsoft had an AI chatbot, I think, called Tay or something?
Alex Kantrowitz
Tay?
Aaron Levie
Yeah, Tay.
Alex Kantrowitz
So I remember it well because I had the exclusive. So Microsoft came to me to break that news at BuzzFeed, and I wrote, Microsoft has this fun chatbot called Tay. It will, you know, be your friend. I pinned it to my Twitter profile, went to sleep in San Francisco, woke up that morning, overnight, Europe and the east coast had figured out that Tay had been a Nazi. And I woke up to many concerned messages telling me, please take the pin down.
Aaron Levie
Okay, so I'm glad that I didn't know you caused this problem. So that's.
Alex Kantrowitz
Actually, I didn't cause it, but I might have inadvertently supported it. So, okay, I took the pin down eventually.
Aaron Levie
I think this space is always this process of figuring out where these models go a bit crazy, produce the wrong information, hallucinate, or have accuracy issues. And it's all about continuing to iterate on how to improve the system prompt, the model, the alignment of these models. And so, judging by both how they responded, they took it down almost immediately as these examples were coming out, and the fact that they acknowledged why this was occurring and what they're working on about it, I think that they will continue to improve their model and the AI system. And then it's really up to individual customers to decide: which model do you trust? What do you want to use? And I think everybody should take in all of the factors that they would want to consider. So we're certainly not in the business of telling our customers which type of AI model to use. There are going to be some that have really perfected a use case, and so thus you're going to want to use a particular AI model. But I think everybody has to make their own decision of which AI to use.
Alex Kantrowitz
Yeah, I guess their point was if you're an enterprise. I think this is one of the examples given. If you're an enterprise and you're using like Grok to write emails, you don't want it to like in the middle of responding to a sales request to be like, and by the way, you know who was great? Hitler. But you know.
Aaron Levie
My guess, and again, I haven't read whether they've done a postmortem or anything, is that that's not built into the model so much as it was more of a Grok-specific application issue that caused that. But let's see how they respond.
Alex Kantrowitz
I just want to quickly agree with you here, because Elon, and we had talked about this actually on the Monday show with MG Siegler, had repeatedly talked about how he lost control of Grok, and it was citing Media Matters to try to take down Catturd, which we know is a capital-punishment-worthy crime in the Elon universe. He kept saying Grok's getting a rewrite. So this is clearly a post-training snafu. Or they took it from something that was politically correct, they wanted to make it less politically correct, and this is sort of where you get on the Internet when you want to go there.
Aaron Levie
I think to respond to that initial question from that person. I do think that anybody who wants to have an enterprise business does have to ensure that they are building basically purely utilitarian AI systems that are generally considered to be very safe and trustworthy. So if you want to be in the B2B game, which will be most of the volume of AI usage and APIs over time, because that's how you will show up in every other product, then this matters a ton. I just haven't seen evidence that they don't want to go fix those problems. But we'll see.
Alex Kantrowitz
Okay, yeah, I hear you. I think you're probably right here. All right, I want to go to break. Before we go to break: if you are on techmeme.com this weekend, you probably see that this podcast is showing up as the top podcast in the list of shows. It's reverse chronological, so we posted it, I think, most recently before the weekend. But it's a great placement, and I want to thank Techmeme for it. If you're not familiar with Techmeme, it's read by tech industry leaders, executives, VCs, founders, key product people. It has info-dense headlines summarizing the news, enabling leaders to absorb what happened in tech as quickly as possible. I use it all the time for the show, and it provides unique and valuable context, including related news, tweets, Bluesky posts, Threads posts when people are still threading. Highly recommend Techmeme. Thank you, Techmeme. It's really great to be partnering with them. All right, we're going to go to a quick break, and then we're going to talk about Nvidia hitting $4 trillion. And we're back here on Big Technology Podcast with Box CEO Aaron Levie. Aaron, the money for an industry that is majority enabling coding use cases keeps pouring in, and we now have our first $4 trillion company. I think MG Siegler pointed out that the first $1 trillion company was Apple, the first $2 trillion company was Apple, the first $3 trillion company was Apple, and the first $4 trillion company is Nvidia. So it just goes to show you: all those decades of working to sell computers and iPhones, and now the GPUs are the hotness. This is from The Times: Nvidia spent three decades building a business worth $1 trillion, and two years turning itself into a $4 trillion company. Is it just another number, or is there something significant about this?
Aaron Levie
I think it's fun. It's a fun milestone. There's obviously nothing magical about 4 versus 3.999, so to some extent it's mostly symbolic. But I think it being the largest company in the world has an embedded message in it, which is just the point of leverage that Nvidia has relative to what essentially everybody is betting on as the future of the economy, which is an AI-powered economy with robots and self-driving cars and AI systems that we chat with and agents that do work for us. You would expect that a meaningful portion of the profits of that economy will accrue to the infrastructure providers of that economy. And as you go through the stack, you've got the hyperscalers, you have the model providers, and then you have the chip providers. And Nvidia is in the pole position on the chip front. So I think it's well deserved. Jensen's a beast. He's worked obviously insanely hard for decades to get to this point. And as always, it's right place, right time, with many, many decades of building up to be able to be in that position. So I think it's an important milestone for sure.
Alex Kantrowitz
Speaking of these big numbers, though: eventually they have to be tied to reality, and it's not just Nvidia, which is going to have to justify $4 trillion now. I guess the scaling laws news is good for Nvidia. Maybe that's part of the reason why it's up today. But you also have CoreWeave, which is at a 4x jump in its share price after a pretty underwhelming IPO. And then you have Meta spending all this money on talent. Newcomer says this AI data center mania conjures up the B word, which means bubble. Is it something to fear? What do you think about this term, bubble?
Aaron Levie
I think it's all well placed at the moment. One cool thing, maybe just a small anecdote: Waymo just arrived where I live in Silicon Valley, on the Peninsula. Everybody in downtown SF has already kind of had their religious moment on this, but we've never had it in the suburbs. And you just get into one of these things and it takes you to a meeting or it takes you to dinner, and it's a completely life-altering experience. Just imagine 20 years from now: what if every car you get into is autonomous? What if every factory you go to is like 80% robots just running around? What if every computer you use is augmenting your work by a factor of 10, working on your behalf to do way more? What if, and maybe people won't like this, every time you have a sniffle there's an AI doctor doing diagnostics on you? The future is going to be so many of these autonomous systems around us, helping us with education, healthcare, transportation, commerce, just basic productivity. And so if you think out 20 years and that's the world that we live in, who would you want to invest in other than that architecture and that infrastructure stack right now? Nvidia would be at the center of that, but many of these other players and platforms would obviously be in that investment case. So no, I don't think it's crazy, and I think it's 100% directionally aligned with where the economy is going.
Alex Kantrowitz
I thought $3 trillion was surely the end of it, but we reached $4 trillion so quickly, and I'm just like, oh no, is it going to hit $5 trillion soon? At this point, there's nothing that's out of the imagination for me.
Aaron Levie
I mean, we probably need to start talking about... so, like, what will that really... yeah, why not? Yeah, sure.
Alex Kantrowitz
That's true. Okay, how long? All right: over/under, Nvidia at $10 trillion by 2028.
Aaron Levie
Oh, maybe I'll give it a little bit more time. But to be worth $10 trillion, you probably want, let's say, $300 billion in profit. And so with their margins, that means they're doing about $400 billion in revenue. That's totally not crazy. $400 billion, $500 billion in revenue for Nvidia: that is a totally realistic scenario to imagine.
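A quick sanity check of the back-of-envelope math above, as a sketch: the ~33x earnings multiple is an assumption of mine, not a figure from the episode, and the ~75% net margin is what the quoted $300 billion profit / $400 billion revenue figures imply.

```python
# Back-of-envelope check of the $10T scenario discussed above.
# Assumptions (not from the episode): a ~33x price-to-earnings multiple,
# and the ~75% net margin implied by the quoted profit/revenue figures.

def implied_financials(market_cap: float, pe: float, net_margin: float):
    """Return the (profit, revenue) implied by a market cap, P/E, and net margin."""
    profit = market_cap / pe          # earnings needed to support the valuation
    revenue = profit / net_margin     # top line needed at that margin
    return profit, revenue

profit, revenue = implied_financials(10e12, 33.3, 0.75)
print(f"Implied profit:  ${profit / 1e9:.0f}B")   # roughly $300B
print(f"Implied revenue: ${revenue / 1e9:.0f}B")  # roughly $400B
```

Under those assumed multiples, the numbers land right where the conversation does: roughly $300 billion in profit on roughly $400 billion in revenue.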
Alex Kantrowitz
Okay, this isn't investment advice, but I'm starting to scratch my head here. All right. So of course there's a company that Nvidia has replaced as the one that's been setting these bars, which is Apple.
Aaron Levie
Yeah.
Alex Kantrowitz
This is a report in Bloomberg: "Apple should consider replacing Tim Cook as CEO, LightShed says." So the story says Apple should consider replacing Tim Cook as the iPhone maker's struggles with artificial intelligence raise significant risks for the company. Apple needs a product-focused CEO, not one centered on logistics, the two analysts said. Missing AI could fundamentally alter the company's long-term trajectory and ability to grow at all. AI will reshape industries across the global economy, and Apple risks becoming one of its casualties. It's great that we set this up, right? Could Nvidia hit $10 trillion? Because if AI is going to be as transformative as you suggested, with all these various use cases, it is true Apple has been flat-footed. Is this the craziest suggestion that the LightShed guys are making?
Aaron Levie
Well, maybe a couple of things. First of all, I think Tim's great, and so I have a bias toward Tim for a number of reasons. But the thing that is worth noting is how strong Apple's position is, and what that then equates to is their ability to watch the space and figure out the right move to make and when to make it. Because whether some people like it or not, this is still the best handheld device on the planet, it has the best set of apps on the planet, and it has your whole life kind of tied to it. So given they own that platform, their ability to lodge AI into it at any point in the future remains very strong. And so I look at this as: you have basically three options as a company. You could be a first mover and then totally have a debacle and it not work, and we've actually seen plenty of examples in AI where the first mover is no longer the relevant player. You could have a scenario where you are a first mover that has a compounding advantage that continues to persist; let's say OpenAI is in that category, incredible execution and absolutely amazing. And then you have another category, which is you enter the space at a time when the architecture has sort of been figured out, when we understand the economics of the model, when you're able to have step-function levels of improvement by the time that you launch into it. And I think maybe Apple didn't purposely make that choice, but they are clearly in the position where they can actually have that choice. Now, if this was 2004, we could have easily said, why has Apple not released a phone? And yet by 2007 that wouldn't have mattered, and they had the dominant platform that would continue to exist. I mean, Microsoft had a tablet computer in 2002 or something. I owned one, or my co-founder owned one. And I owned one of their Windows smartphones, made by Compaq or HP.
And so think about that: they had the smartphone and they had the tablet computer first, and neither of those things mattered to the long-term dominance in the space. And so Apple has the position, and the potential, of jumping in when the time is right. They still have the devices that we're using, they still have the OS that we're using, and they'll be able to have learned from all of the mistakes of various companies along the way. So I wouldn't count them out. And I think they are clearly sitting around saying, when is the right time to pull the trigger on a much bigger move? And so I think we have to just wait for that.
Alex Kantrowitz
What do you mean much bigger move?
Aaron Levie
Well, they have to decide to either train a model that gives them a state-of-the-art AI model, or do some substantial partnership or acquisition move, like all of what we've seen with these kinds of founder-CEO hires. Obviously the acquisition environment is complicated because of the DOJ and FTC. But I would certainly be astonished if, two years from now, one of those choices hadn't been made. But I'm not that worried that it hasn't been made yet.
Alex Kantrowitz
All right, here is my Galaxy Brain idea. It's one step further than the typical Galaxy Brain. So I've been on the show advocating for Perplexity. Maybe I've been thinking too small. Let me put it this way: Apple just lost its COO this week, Jeff Williams, and everybody thought Jeff Williams was going to be the successor to Tim Cook. Are we now in a moment of setup where Sam Altman and Jony Ive have teamed up on a device, and Tim Cook is getting ready to retire in the next couple years without a clear successor now that Williams is gone? Do we see the ultimate tech merger, where OpenAI becomes for-profit and Tim Cook says, Sam, Jony, pick up the legacy? They did the picture. I think they want this. Can it happen?
Aaron Levie
That is some wild fan fiction. Anything could happen; I think that should be totally in the category of options. If you're being realistic, by the time that moment would likely occur, OpenAI should be much bigger, and that would be much more complicated as a deal. But I like it. Certainly as a brainstorm, it's a great way to brainstorm.
Alex Kantrowitz
Okay, that's a very nice way to let me down. And yeah, I said merger for a reason. I wouldn't call it an acquisition. It might have to come at this point where the two just come together that way.
Aaron Levie
No, no. Fair point. And I've seen crazier things in my life in tech, so I can't rule anything out at this point. So let's see what happens.
Alex Kantrowitz
Let me put it this way as we end: I think that type of deal is far more likely than Apple buying Anthropic, just because it's going to require something so much more substantial.
Aaron Levie
Why would that be more likely?
Alex Kantrowitz
Because I think it's a better cultural fit. I think the Anthropic team and Apple would clash. But I think OpenAI going into Apple, you know, could potentially work, although OpenAI is much leakier than Apple, although Apple leaks everything to Gurman these days.
Aaron Levie
You know, the only thing I would suggest or posit is this: it'll be fascinating to watch what Meta does with its new superintelligence org, because, and we actually already saw it with Grok, to be clear, Meta will be a second round of this. If from a more or less standing start they're able to accomplish, let's say, some new breakthrough state-of-the-art model in 6, 12, or 18 months, I think what that will prove is that it still remains largely a talent and compute and data game. Which means that you don't really need to buy an existing incumbent; you mostly just need to decide to go big on the compute and on the training, and obviously have the right talent to do that. And at the end of the day it doesn't really matter whether you had all of the other prior versions before that moment; you're doing a reset no matter what. So I would just argue that we get all excited about this idea of some big mega-acquisition, but it's not a problem that requires that kind of scale, except for when you're just doing the capital expenditure of the GPUs. You really just need the right talent, the right training data, and the right compute. So I would bet against one of these very large, multi-tens-of-billions-of-dollars deals, simply because there are other paths to get there that are not as complicated.
Alex Kantrowitz
That's a great point. I mean, it's less about an individual company's IP, because everyone's effectively sharing the IP. It's about productizing it, right?
Aaron Levie
Well, that's exactly right. If you look at this industry, within one year every single breakthrough idea eventually gets discovered by everybody else. Nobody has kept an advantage for more than a year on some secret idea that only they have. And so Apple's ultimate advantage is they have a distribution model that nobody else has, and they have a form factor where AI could show up that nobody has. So they don't necessarily need to have the best model or to be one or two months ahead of anybody else. They just need to have a good enough model that any one of our non-tech friends would just be like, this is fantastic, I love this thing. Which, again, just does not require that scale of acquisition or whatnot.
Alex Kantrowitz
All right, everybody, the website is box.com. You can also find Aaron's very insightful posts about AI on LinkedIn: Aaron Levie. And on X his handle is @levie. Aaron, this was so fun. It's always great to speak with you. I appreciate the time. Thank you.
Aaron Levie
Good to see you man. Take care.
Alex Kantrowitz
You too. Thank you everybody for listening and we'll see you next time on Big Technology Podcast.
Big Technology Podcast: Episode Summary
Title: OpenAI’s Windsurf Crash, Grok’s Wild Week, Replace Tim Cook? — With Aaron Levie
Host: Alex Kantrowitz
Guest: Aaron Levie (CEO, Box)
Release Date: July 12, 2025
In this episode of the Big Technology Podcast, host Alex Kantrowitz welcomes Aaron Levie, CEO of Box, to discuss a series of significant developments in the tech world. The conversation delves into the fallout of OpenAI’s failed acquisition of Windsurf, the tumultuous week experienced by Elon Musk’s Grok, Nvidia’s monumental rise to a $4 trillion valuation, and the provocative speculation surrounding Apple potentially replacing its long-time CEO, Tim Cook.
[00:55 – 05:12]
Alex Kantrowitz opens the discussion by highlighting the abrupt cancellation of OpenAI's deal to acquire Windsurf. Instead, Google has stepped in to hire Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several R&D employees to join DeepMind.
Notable Quote:
“Windsurf has been one of the faster growing AI coding platforms... Google now has a way to accelerate their position in AI coding.”
— Aaron Levie [02:15]
[05:54 – 06:12]
Alex Kantrowitz probes into the allocation of AI spending, questioning whether a majority is directed towards coding applications.
Aaron Levie responds affirmatively, asserting that coding stands as the pinnacle use case in generative AI due to the high volume of tokens it generates, which translates to substantial GPU expenses.
Notable Quote:
“Coding is definitely the one that is this incredible. One person could literally be causing tens of thousands of dollars a month in AI consumption because of just the single actions that he is doing.”
— Aaron Levie [06:05]
[11:30 – 24:34]
The conversation shifts to Grok, Elon Musk’s AI project, focusing on its latest iteration, Grok 4, and recent controversies surrounding its alignment and behavior.
Notable Quotes:
“How committed they are to solving this problem. And Varun and team are now going to be one more way to accelerate that.”
— Aaron Levie [08:24]
“If you're on the B2B game... you have to ensure that you are building basically purely utilitarian AI systems that are generally considered to be very safe and trustworthy.”
— Aaron Levie [27:58]
[31:46 – 35:39]
Alex Kantrowitz introduces the topic of Nvidia reaching a $4 trillion valuation, marking it as the first company to achieve this milestone.
Notable Quote:
“...the point of leverage that Nvidia has relative to essentially what everybody is betting on is the future of the economy, which is an AI powered economy...”
— Aaron Levie [32:58]
[36:40 – 45:57]
The episode concludes with a provocative discussion on whether Apple should consider replacing Tim Cook amidst challenges in the AI landscape.
Notable Quotes:
“Apple has the devices that we're using, they still have the OS that we're using, and they'll be able to have learned from all of the mistakes of various companies along the way.”
— Aaron Levie [37:05]
“It has your whole life kind of tied to it.”
— Aaron Levie [39:29]
[41:04 – 43:16]
In a lighter segment, Alex ventures into a speculative scenario contemplating a possible merger between OpenAI and Apple’s leadership, to which Aaron responds with cautious optimism, emphasizing the unpredictable nature of tech industry developments.
Notable Quote:
“Certainly as a brainstorm, it's a great way to brainstorm.”
— Aaron Levie [42:27]
The episode provides a comprehensive overview of the latest shifts in the AI and tech industries, featuring insightful analysis from Aaron Levie. Key themes include the strategic maneuvers of major AI players like OpenAI and Google, the critical role of Nvidia in the burgeoning AI economy, and the ongoing debates surrounding AI model scaling and alignment. The discussion also touches on leadership challenges faced by tech giants like Apple, offering listeners a nuanced perspective on the dynamic landscape of technology.
Closing Remarks:
Alex Kantrowitz wraps up by directing listeners to Box’s website and Aaron Levie’s professional profiles for further insights, thanking Aaron for his participation and encouraging the audience to stay tuned for future episodes.
Aaron Levie on Windsurf and Google:
“Windsurf has been one of the faster growing AI coding platforms... Google now has a way to accelerate their position in AI coding.”
[02:15]
Aaron Levie on Coding as a Killer App:
“Coding is definitely the one that is this incredible. One person could literally be causing tens of thousands of dollars a month in AI consumption because of just the single actions that he is doing.”
[06:05]
Aaron Levie on Superintelligence vs. AGI:
“Superintelligence, and this idea that we have AI that will far exceed a human: that alone is enough of a breakthrough to be shooting for.”
[17:48]
Aaron Levie on Nvidia’s Position:
“Jensen's a beast... decades of building up to be able to be in that position.”
[32:58]
Aaron Levie on Apple’s AI Strategy:
“Apple has the devices that we're using, they still have the OS that we're using, and they'll be able to have learned from all of the mistakes of various companies along the way.”
[37:05]
For more detailed discussions and expert insights, visit Big Technology Podcast or follow Aaron Levie on LinkedIn and X.