Transcript
A (0:01)
The world moves fast, your workday even faster. Pitching products, drafting reports, analyzing data. Microsoft 365 Copilot is your AI assistant for work, built into Word, Excel, PowerPoint and other Microsoft 365 apps you use, helping you quickly write, analyze, create and summarize so you can cut through clutter and clear a path to your best work. Learn more at Microsoft.com/M365Copilot. Welcome to the Tech Brew Ride Home for Monday, February 2, 2026. I'm Brian McCullough. Today: is Nvidia going through with their massive OpenAI investment or not? Does Apple have what it takes to win in the AI era or not? Is Elon the only one who can make data centers in space happen or not? And a roundup of the continued fascination with Moltbook. Here's what you missed today in the world of tech. Attackers don't need exploits when they use your allowed tools against you. That's why ThreatLocker enforces default deny at execution, stopping unknown software, scripts and ransomware the moment it tries to run. No signatures, no guesswork, just control. ThreatLocker takes zero trust from theory to practice by blocking any unauthorized application or behavior from ever running in the first place. Generative AI has lowered the barrier to malware creation, so ThreatLocker prevents AI-generated polymorphic and fileless attacks by shutting down unknown behavior automatically, even if it's never been seen in the wild. ThreatLocker gives you tight control without the noise, meaning fewer alerts and a cleaner, predictable operational posture. Learn more at threatlocker.com/TechBrewRideHome. That's threatlocker.com/TechBrewRideHome. So, a little bit of drama over the weekend. Nvidia, you might recall, has plans to invest up to $100 billion in OpenAI. You might recall that was part of that big announcement that people said at the time was sort of circular investing, a bit of a paying-Peter-to-pay-Paul-to-pay-Peter sort of deal.
But sources say that investment might have stalled after some inside Nvidia expressed doubts about the deal. Quoting the Journal: Jensen Huang has privately emphasized to industry associates in recent months that the original $100 billion agreement with OpenAI was non-binding and not finalized, people familiar with the matter said. He has also privately criticized what he has described as a lack of discipline in OpenAI's business approach, and expressed concern about the competition OpenAI faces from the likes of Google and Anthropic, some of the people said. Our teams are actively working through details of our partnership. Nvidia technology has underpinned our breakthroughs from the start, powers our systems today and will remain central as we scale what comes next, an OpenAI spokesman said. In a November filing, Nvidia said there was no assurance that it would enter into definitive agreements with respect to the OpenAI opportunity or other potential investments, or that any investment will be completed on expected terms, if at all. At a UBS conference in Scottsdale, Arizona, Nvidia Chief Financial Officer Colette Kress said the company hadn't completed a definitive agreement with OpenAI. Huang has indicated to associates that he still believes it is crucially important to provide OpenAI with financial support in one form or another, in part because OpenAI is one of the chip designer's largest customers, people familiar with the matter said. If OpenAI were to fall behind other AI developers, it could dent Nvidia's sales. Well, that original reporting definitely got the attention of various PR and comms teams, I'm sure, because Jensen Huang now says Nvidia's OpenAI investment will be the largest investment we've ever made. Quoting Bloomberg: We will invest a great deal of money, Huang told reporters while visiting Taipei on Saturday. I believe in OpenAI. The work that they do is incredible. They're one of the most consequential companies of our time.
Huang didn't say exactly how much the company might contribute, but described the investment as huge. Let Sam announce how much he's going to raise, it's for him to decide, Huang said, adding that Altman is in the process of closing the round, but we will definitely participate in the next round of financing because it's such a good investment. When asked by a reporter in Taipei about the report that seemed to suggest he wasn't very happy with OpenAI, Jensen said that's nonsense. Huang said Nvidia's contribution to OpenAI's latest funding round wouldn't approach $100 billion, though OpenAI has been seeking to raise as much as $100 billion in its current funding round, according to a person with knowledge of the matter, asking not to be identified because the discussions are private. Amazon was in talks to invest as much as $50 billion in the fundraise and expand an agreement that involves selling computing power to the AI startup, the person said on Thursday. Altman has also met with top investors in the Middle East to line up funding for the round, which may value the company at about $750 billion, people familiar with the matter said earlier in January, while asking not to be identified because the information isn't public. Microsoft is in discussions to participate as well, The Information had previously reported. End quote. Mark Gurman says that his sources tell him that Apple executives are increasingly beginning to question whether Apple has the ingredients to win in the AI era. Some have argued that Apple doesn't need AI, noting that it never owned the Internet or ran its own search engine. But that misses the point. Apple's past 25 years were built on Internet technology that sat at the heart of breakthrough products including the iPhone, iMac, iPod, iPad, iTunes, the App Store and iOS. These are offerings that only exist because of the Web.
But Chief Executive Officer Tim Cook has yet to articulate a bold AI vision, and his hiring of Google veteran John Giannandrea to run artificial intelligence in 2018 now looks like the biggest mistake of his tenure. Giannandrea stepped down as AI chief in December, but he'd already been sidelined for much of last year. Software chief Craig Federighi took over, securing a short-term fix via a partnership with Google's Gemini to deliver working AI models. Hardware alone won't save Apple. Consumers don't buy its products for the components, they buy them for the experience, including the integration of sleek designs, software and services. Right now, AI is missing from that equation. For Apple to sustain its growth and relevance, it must execute a company-wide AI reckoning that changes its approach to product development. Even if Apple continues to thrive in the smartphone market, it could still lose its standing in a fast-changing tech world. The company's own senior executives understand this and privately question whether Apple has the right ingredients to win in the AI-first landscape. There is no miracle product that will guarantee Apple's success here. It's no longer working on a self-driving car, and there isn't an obvious new category that can generate iPhone-scale revenue, at least not yet. That's why Apple is betting on a patchwork approach: AI-enhanced services, a range of wearable and home devices, and a more personalized and conversational Siri assistant. For the strategy to work, Apple must build durable and proprietary AI in house, powered by servers with higher-end versions of its own custom chips. Relying on Google's Gemini cannot be the long-term answer, no matter how Apple frames the arrangement as a collaboration. Relying on a chief rival to paper over a core weakness is not a strategy, it's a stopgap measure. The situation echoes Apple's 1997 dependence on Microsoft.
Even if the optics are different, hiring and retaining elite AI talent will be critical. So will humility. Apple can no longer assume that superior hardware execution alone will protect it from AI-focused competitors. The company needs more than a holiday-season sales bump. It needs a path to leadership in the next era of computing. So I want to point something out here. Given the unique position Gurman has in the Apple rumor ecosystem, I honestly wonder if him writing that might be some strategic leaking from folks inside of Apple. That is, I'm wondering if the call for better AI leadership is coming from inside the house of Cupertino. According to Reuters, SpaceX is seeking US FCC approval to launch 1 million satellites. SpaceX claims that they will orbit the Earth and harness the power of the sun to power AI data centers. Data centers are the physical backbone of artificial intelligence, requiring massive amounts of power. By directly harnessing near-constant solar power with little operating or maintenance costs, these satellites will achieve transformative energy efficiency while significantly reducing the environmental impact associated with terrestrial data centers, the FCC filing said. Elon Musk would need the telecom regulator's approval to move forward. While it is unlikely SpaceX will put 1 million satellites in space, where only 15,000 satellites exist currently, satellite operators sometimes request approval for higher numbers of satellites than they intend to deploy to buy design flexibility. SpaceX sought approval for 42,000 Starlink satellites before it began deployment of the system. The growing network currently has roughly 9,500 satellites in space. SpaceX's request bets heavily on reduced costs of Starship, the company's next-generation reusable rocket under development.
Fortunately, the development of fully reusable launch vehicles like Starship can deploy millions of tons of mass per year to orbit when launching at rate, meaning on-orbit processing capacity can reach unprecedented scale and speed compared to terrestrial buildouts, with significantly reduced environmental impact. Starship has test launched 11 times since 2023. Musk expects the rocket, which is crucial for expanding Starlink with more powerful satellites, to put its first payloads into orbit this year. End quote. So, far be it from me to question whether Elon Musk can achieve the business or even engineering impossible, because, you know, things like launching the first successful new car manufacturer in multiple generations, or creating a company that can reuse rockets and become the most valuable private company ever. Only Elon Musk. But given, again, the engineering challenges we've talked about vis-a-vis this whole concept of data centers in space: is this insane, or will Elon make us all look like idiots in about 10 years? Managing your cap table shouldn't drain your time or derail your budget, and yet somehow it can manage to do both. Pulley knows there's a better way. That's why they help take the complexity and surprises out of equity management. Pulley's intuitive workflows, built-in compliance tools and decision-ready reporting are designed to work for you, not against you. Pulley helps you issue, track and manage equity, stay compliant with up-to-date 409A valuations, complete stock-based compensation reporting and more, all without the expensive legal fees or endless manual work. Learn more and get started at pulley.com/brew. That's pulley.com/brew. If you've ever wanted to be a fly on the wall for the conversations world-class CEOs have behind closed doors, then you may want to listen to the new podcast Long Strange Trip, CEO to CEO.
In each episode, Brian Halligan, co-founder of HubSpot, speaks with leaders to unpack the real stories behind scaling their companies, from the emotional toll of leadership to the tactical decisions that shape a company's future. Expect candid conversations about hiring, culture, communication strategy and more. Whether you're an aspiring founder, a seasoned CEO, or simply curious about the stories behind the CEOs on the long strange trip of building enduring, legendary companies, this is a show you won't want to miss. Long Strange Trip is available everywhere you get your podcasts. That's Long Strange Trip Podcast. Anthropic continues to have things both ways, in a way. A new paper co-authored by researchers at Anthropic and the University of Toronto quantifies how frequently AI chatbots produce interactions that could disempower users, i.e., shift their beliefs, values or actions in ways that ultimately undermine their autonomy instead of helping them. The study was titled Who's in Control? Disempowerment Patterns in Real-World LLM Usage, and it analyzed nearly one and a half million real Claude chatbot conversations with an automated classification system called Clio, seeking to identify patterns where users ended up worse off after an AI exchange. The researchers categorized disempowerment into types such as reality distortion (convincing users of false narratives), belief distortion (changing users' values or judgments), and action distortion (encouraging actions misaligned with a user's intents or interests). Severe risks were uncommon at the individual level. For example, reality distortion appeared roughly once every 1,300 conversations and action distortion once every 6,000. But when considering mild forms of disempowerment, the rates jumped to about one in 50 to 70 chats.
This underscores that subtle influences occur far more often than extreme cases. Anthropic's team also identified several amplifying factors that make users more susceptible to influence from a chatbot, such as being in a personal crisis, having formed a close emotional attachment to the bot, relying on AI for daily tasks, or treating the AI as an unquestioned authority. For example, vulnerability due to life disruption showed up in approximately one out of every 300 conversations. Crucially, the paper stresses, these findings don't prove that chatbots caused harm, merely that they have the potential to steer users in harmful directions. The authors note that their automated approach measures disempowerment potential rather than confirmed outcomes, calling for future research involving human-centered studies to better assess real-world impacts. Finally today, Moltbot continues to fascinate. Here are a couple of takes. First, Simon Willison: I've not been brave enough to install Clawdbot/Moltbot/OpenClaw for myself yet. I first wrote about the risks of a rogue digital agent back in April 2023, and while the latest generation of models are better at identifying and refusing malicious instructions, they are a very long way away from being guaranteed safe. The amount of value people are unlocking right now by throwing caution to the wind is hard to ignore, though. Here's Clawdbot buying AJ Stuvenberger a car by negotiating with multiple dealers over email. Here's Clawdbot understanding a voice message by converting the audio into a WAV file with FFmpeg and then finding an OpenAI API key and using that with curl to transcribe the audio with the Whisper API. People are buying dedicated Mac Minis just to run OpenClaw, under the rationale that at least it can't destroy their main computer if something goes wrong. They're still hooking it up to their private emails and data, though, so the lethal trifecta is very much still in play.
The billion-dollar question right now is whether we can figure out how to build a safe version of the system. The demand is very clearly here, and the normalization of deviance dictates that people will keep taking bigger and bigger risks until something terrible happens. The most promising direction I've seen here remains the CaMeL proposal from DeepMind, but that's 10 months old now, and I still haven't seen a convincing implementation of the patterns it describes. The demand is real. People have seen what an unrestricted personal digital assistant can do. End quote. And here's Andrej Karpathy, quote: I'm being accused of overhyping the site. Everyone has heard too much about it today already. People's reactions varied widely, from how is this interesting at all, all the way to it's so over. To add a few words beyond just memeing in jest: obviously when you take a look at the activity, it's a lot of garbage, spam, scams, slop, the crypto people, highly concerning privacy and security, prompt injection attacks in the wild, and a lot of it is explicitly prompted and fake posts and comments designed to convert attention into ad revenue sharing. And this is clearly not the first time LLMs were put in a loop to talk to each other. So yes, it's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers. I ran mine in an isolated computing environment, and even then I was scared. It's way too much of a wild west, and you are putting your computer and private data at high risk. That said, we have never seen this many LLM agents, 150,000 at the moment, wired up via a global, persistent, agent-first scratchpad. Each of these agents is individually quite capable now, and they have their own unique context, data, knowledge, tools, instructions and all. A network of all that at this scale is simply unprecedented. This brings me again to a tweet from a few days ago.
The majority of the kerfuffle is between people who look at the current point and people who look at the current slope, which IMO again gets to the heart of the variance. Yes, clearly it's a dumpster fire right now, but it's also true that we are well into uncharted territory, with bleeding-edge automations that we barely even understand individually, let alone a network thereof, reaching in numbers possibly into millions, with increasing capability and increasing proliferation. The second-order effects of agent networks that share scratchpads are very difficult to anticipate. I don't really know that we are getting a coordinated Skynet, though it clearly type checks as the early stages of a lot of AI sci-fi takeoff, the toddler version I guess. But certainly what we are getting is a complete mess of a computer security nightmare at scale. We may also see all kinds of weird activity: viruses of text that spread across agents, a lot more gain of function on jailbreaks, weirder attractor states, highly correlated botnet-like activity, delusions, psychosis, both agent and human, et cetera. It's very hard to tell, because the experiment is running live. TL;DR: sure, maybe I am overhyping what you see today, but I am not overhyping large networks of autonomous LLM agents in principle. That I'm pretty sure of. End quote. But at the same time, it's worth noting that another researcher came out and said an exposed Moltbook database was out there that could have let anyone take control of the site's AI agents and post anything. That database has since been secured, apparently. But still. Hello from London. Sometime, when I have time, remind me to tell you about how today I came the closest I've come in eight years to not being able to put up a show, because iCloud. I'll tell you about it some other time. Talk to you tomorrow.
