Transcript
A (0:01)
If you're the purchasing manager at a manufacturing plant, you know having a trusted partner makes all the difference. That's why, hands down, you count on Grainger for auto reordering. With on-time restocks, your team will have the cut-resistant gloves they need at the start of their shift, and you can end your day knowing they've got safety well in hand. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger, for the ones who get it done.
B (0:30)
If you're an HVAC technician and a call comes in, Grainger knows that you need a partner that helps you find the right product fast and hassle-free. And you know that when the first problem of the day is a clanking blower motor, there's no need to break a sweat. With Grainger's easy-to-use website and product details, you're confident you'll soon have everything humming right along. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger, for the ones who get it done.
C (1:00)
Welcome to the podcast. I'm your host, Jayden Schafer. Today we're talking about some big news out of Meta: Zuckerberg says that Meta is launching its own AI infrastructure initiative. This is interesting. We've seen the same thing from players like OpenAI, Microsoft, and Google, and everyone seems to be rushing to build not just, you know, the latest model, but the infrastructure that actually powers it, because they know this is an area that's hotly contested. There's a lot of competition, and there just isn't enough infrastructure for everyone, so it kind of goes to the highest bidder unless you build your own. So I think when Meta first laid out all of its capex projects last year, it was pretty clear what they were doing. They said they were going to spend super aggressively to secure the infrastructure needed to compete at the highest levels of artificial intelligence. And that is not cheap. So today on the podcast we're going to break down everything they're planning here. But before we get into that: if you want to test all of the latest models, including every model from OpenAI, Claude, Meta, Grok, plus a bunch of cool image generation models, and use them to create tools, go check out AIBox.ai, my own startup. I'll leave a link to that in the description. All right, let's talk about what's going on with Meta. As a little background, during an earnings call last summer, Meta CFO Susan Li framed this strategy in really stark terms. She was arguing, essentially, that the infrastructure itself was going to become basically your competitive moat in the AI race. She said, quote, we expect that developing leading AI infrastructure will be a core advantage in developing the best AI models and product experience. So it's interesting.
I feel like Meta isn't just looking at its compute capacity as, you know, something that's going to support the company; they're treating it as the most strategic weapon they have. And right now Meta is moving really quickly from intention to execution. On Monday, Zuckerberg announced the launch of Meta Compute, a new company-wide initiative aimed at dramatically scaling Meta's AI infrastructure footprint. There's a whole announcement that came out, but I think it's all basically centered on compute, energy, and long-term capacity planning, which is where Meta is going. There are a couple of different components to it. In a post over on Threads, Zuckerberg's version of X, he said they're going to expand their energy usage at a scale that has rarely been seen outside of utilities or nation-state infrastructure projects. So I mean, Meta is planning some massive increases in energy usage. Zuckerberg said, quote, Meta is planning to build tens of gigawatts this decade and hundreds of gigawatts or more over time. How we engineer, invest and partner to build this infrastructure will become a strategic advantage. To put this into perspective, a single gigawatt represents 1 billion watts of power, which is roughly enough to supply electricity to hundreds of thousands of homes. So tens of gigawatts would place Meta among basically the biggest energy consumers in the entire world; they'd basically be consuming as much energy as an entire metropolitan region. A lot of analysts have increasingly warned that the rise of this huge, large-scale AI training and inference could essentially reshape national energy demand curves, which is just absolutely insane.
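[Editor's note: the back-of-the-envelope numbers in this segment, both the "one gigawatt powers hundreds of thousands of homes" claim and the 5-to-50-gigawatt growth estimate discussed next, can be sanity-checked in a few lines of Python. The average-household figure below is an assumption (roughly 1.2 kW of continuous draw, about 10,500 kWh per year for a typical US home), not a number from the episode:]

```python
# Sanity check: how many homes does one gigawatt supply?
# ASSUMPTION (not from the episode): an average US home draws
# ~1.2 kW continuously, i.e. roughly 10,500 kWh per year.

GIGAWATT_W = 1_000_000_000      # 1 GW = 1 billion watts
AVG_HOME_DRAW_W = 1_200         # assumed average continuous draw per home

homes_per_gw = GIGAWATT_W / AVG_HOME_DRAW_W
print(f"1 GW supplies roughly {homes_per_gw:,.0f} homes")  # ~833,333

# The episode's estimate: US AI-related demand growing ~10x in a decade,
# from about 5 GW to about 50 GW. The implied compound annual growth rate:
start_gw, end_gw, years = 5, 50, 10
annual_growth = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied annual growth: {annual_growth:.1%}")  # ~25.9% per year
```

[On those assumptions, "hundreds of thousands of homes" per gigawatt checks out, and a 10x jump over ten years works out to roughly 26% compound growth per year.]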
A lot of people are estimating that US AI-related power consumption could grow about 10x over the next decade, climbing from about 5 gigawatts today, which is what we're currently consuming, to over 50 gigawatts or more as data centers become more common and roll out from all of these AI companies. So I think this huge push really highlights a basic reality for the AI sector: as these models get bigger and more capable, access to reliable, low-cost compute and energy is becoming really critical, basically just as critical as the algorithms, the LLMs, and the innovation we're getting from these AI models. Just as critical is where the electricity is going to come from. In today's environment, companies have to plan years or even decades ahead for capacity, and this is really tricky, right? How much energy, how much capacity are they going to need in one year, two years, three years, five years, ten years? They have to start making these estimates, because as things grow, they don't want to be, you know, caught without the infrastructure they need to scale. So Zuckerberg outlined a leadership structure for Meta Compute and went so far as to name the three executives who are going to oversee this new initiative. One of them is Santosh Janardhan, Meta's head of global infrastructure. He's been at Meta since 2009, and he's essentially going to lead efforts across the technical architecture and software stack, the internal silicon programs, developer productivity, and the construction and operations of Meta's global data center fleet and network. So he is going to be overseeing a lot. Another really key figure in all of this is Daniel Gross, who joined Meta last year after co-founding Safe Superintelligence with former OpenAI chief scientist Ilya Sutskever.
And Mark Zuckerberg said that Gross is going to head a new group focused on long-term capacity strategy, supplier relations, industry analysis, planning, and business modeling. I think this role shows that Meta is treating compute procurement and forecasting as a core strategic discipline, right? This isn't just one of these back-office functions; it's a really strategic thing. The third executive they named is Dina Powell McCormick, a former senior government official who recently joined Meta as President and Vice Chairman. According to Zuckerberg, Powell McCormick is going to work closely with governments and public institutions to help build, deploy, invest in, and finance Meta's infrastructure. I think the goal there really reflects that a lot of this AI infrastructure work is political by nature: the permitting, the energy access, the public-private partnerships, all of this. If you're on the wrong side of the politics here, you're going to get into a, you know, very tricky situation, because cities have to approve your permits and approve your power consumption. We've seen all sorts of lawsuits and difficult situations hit other AI firms, whether that's OpenAI or xAI; all of them have definitely seen some struggles here. So I think Meta's announcement comes amid a definitely intensifying race among the top big tech firms to lock up AI-ready infrastructure. The capital expenditure plans disclosed over the past year show that most of Meta's largest competitors are basically in a very similar place: they're all very aggressive with these strategies. Microsoft has leaned heavily on partnerships with specialized AI infrastructure providers to try to expand its capacity.
And then Alphabet has moved to vertically integrate further; in December they had an acquisition of the data center firm Intersect. So whether it's acquiring companies, vertically integrating, or building partnerships, all of these top companies are really spending and investing a lot in this. I think if you look at this as a whole, these moves signal a really big shift in how the industry thinks about AI competition. The next phase is going to be decided less by who has the best models today and maybe more by who can sustainably power, cool, and scale the machines required to build the models that are coming in the future. And we've seen, you know, comments from people like Sam Altman saying he's concerned that Elon Musk is going to get access to more compute for xAI than OpenAI has, and if that happens, it puts the company in a really tricky situation. So it's obviously not just Meta that is, you know, focusing on this. If you don't have the compute, you're going to struggle, and we see OpenAI and basically everyone working on this. So because of this, beyond just the compute and the data centers, we're seeing a lot of energy partnerships, and Zuckerberg talked a lot about energy and the need for it. There are a lot of estimates, like I mentioned earlier, that we might need to 10x our energy consumption from 5 gigawatts to 50 gigawatts. So we're seeing deals like an interesting move a while back, where Microsoft made a deal with the company that owns a nuclear reactor in Pennsylvania that had been taken completely offline. Like, this nuclear reactor was no longer operating.
And Microsoft said, essentially, we'll pay $1 billion if you can retrofit this reactor, get it up and running again, and give us an exclusive license to take all of its power, basically. So we're seeing a lot of these really interesting deals; people are being creative, and I think that's a very creative deal. But we need more compute, we need more energy to power the compute, and we're going to need data centers to host the compute. So there are a lot of industries that are going to come out of this. A lot of money is being spent, and I think, you know, a lot of people tend to think, well, this is just money being spent by big companies for big companies. But this has a lot of knock-on effects, and a lot of people in construction and a lot of other industries benefit from it. So overall, I do think this is going to be a net positive if we build more energy capacity. If we don't build more energy capacity and these AI companies just suck up the energy, electricity becomes more expensive, and that's definitely a net negative. Well, maybe not a net negative, but it definitely has negative implications, especially for people who have to pay more for electricity. So I'm excited to see some of these plans where it looks like they're actually planning on building out all parts of the infrastructure, including the energy production. And so we'll be excited to see what Meta does in that regard. Thank you so much for tuning into the podcast today. I really appreciate it. If you want to check out AIBox.ai, there's a link in the description, and you can go and build AI tools even if you do not know how to code. Thanks so much for tuning in. I'll catch you in the next episode.
