Transcript
A (0:01)
The Uniswap wallet makes crypto easier and safer to own and use. Discover new tokens, research confidently, swap instantly, and manage it all securely in one place. The Uniswap trading protocol has powered over $3 trillion in volume, and it's trusted by millions worldwide. Buy your first crypto assets in a few taps and experience the freedom of decentralized finance with Uniswap. Tap the banner to get started.
B (0:32)
Welcome to the Tech Brew Ride Home for Tuesday, December 2, 2025. I'm Brian McCullough. Today: Sam Altman declares a code red for OpenAI, all hands on deck. "Molly, you in danger, girl," to quote Oda Mae Brown from the movie Ghost: long term, is Apple in trouble because of Google's ascendancy in AI? Samsung announces, but does not launch, a tri-fold phone. And Ben Thompson weighs in on the obsession of the day. Here's what you missed today in the world of tech.

Doing business online can feel a little scary these days, especially with AI creating new opportunities for fraud. In fact, Salent estimates that AI was behind roughly 20% of the fraud perpetrated in 2024. Spotting bad agentic AI while allowing good agents to continue with their tasks isn't easy. Thankfully, Mimoto Continuous Captcha can spot malicious agents pretending to be people at the point of account creation or registration. Unlike past captcha solutions, it runs behind the scenes, with no puzzles for users. Mimoto is offering Tech Brew Ride Home listeners early access with a special price for Mimoto Continuous Captcha: right now, our listeners can purchase a year of Mimoto Continuous Captcha for $5,000, a 20% discount on their lowest price plan. To learn more, head to mimoto.ai/ridehome. That's mimoto.ai/ridehome.

Sam Altman has declared a code red to shift more resources to improving ChatGPT amid the rising competition we've been discussing, also delaying OpenAI's other plans, like introducing ads. Quoting The Information: "We are at a critical time for ChatGPT," he said in an internal memo. OpenAI hasn't publicly acknowledged it is working on selling ads, but it is testing different types of ads, including those related to online shopping, according to a person with knowledge of its plans. Millions of people already use ChatGPT to search for products to buy.
Altman said the code red surge to improve ChatGPT meant OpenAI would also delay progress on other products, such as AI agents, which aim to automate tasks related to shopping and health, and Pulse, which generates personalized reports for ChatGPT users to read each morning. He didn't specify what was going wrong with ChatGPT, but Google said this fall that its Gemini chatbot had gained ground in terms of usage. Altman recently warned employees privately that Google's AI resurgence could cause temporary economic headwinds for OpenAI. In a call with OpenAI investors last month, CFO Sarah Friar alluded to a slowdown in ChatGPT growth, though it wasn't clear what growth metric she was referring to, according to a person with knowledge of her remarks.

OpenAI's code red represents a role reversal from three years ago, when Google began its own code red to respond to the threat ChatGPT posed to Google Search. Google later launched its Gemini chatbot, which still lags OpenAI in terms of user numbers, but there are signs it may be catching up. Google said in October that Gemini has 650 million monthly active users, up from 450 million in July, though that's still a far cry from the user figures OpenAI has disclosed for ChatGPT.

You know, just a personal anecdote here, take it or leave it: I've been getting tons of "we're busy, try again later" notifications when I attempt to do deep research on Gemini the last couple of days. Maybe that's indicative of the surge in usage they're seeing. I kind of expect Google to come out soon with some sort of statement about usage in order to keep the pressure on OpenAI, if indeed they are seeing a surge. But at the same time, I wouldn't be surprised at all if OpenAI announced something completely unexpected by the end of the year in an attempt to seize the initiative back. They kind of have to.

Apple says AI chief John Giannandrea is stepping down and will retire in the spring of 2026.
Ex-Microsoft CVP Amar Subramanya is taking over, reporting to Craig Federighi. Quoting MacRumors: Subramanya was previously corporate vice president of AI at Microsoft, and before that he spent 16 years at Google, where he was head of engineering for Google's Gemini assistant. Apple says that he has deep expertise in both AI and ML research that will be important to, quote, "Apple's ongoing innovation and future Apple Intelligence features." Apple CEO Tim Cook thanked Giannandrea for his role advancing Apple's AI work and said that he looks forward to working with Subramanya. He also said that Federighi has played an important role in Apple's AI efforts. Apple said that it is, quote, "poised to accelerate its work in delivering intelligent, trusted and profoundly personal experiences" with the new AI team. End quote.

Another aside here with some thoughts. Apple has been rewarded, in a way, at least by Wall Street, for being a bit behind in AI, at least so far: their stock is at an all-time high. But I wonder. The holy grail for Apple with AI is sort of the ultimate user lock-in, right? You integrate AI into the phone, and then users start using it daily, hourly, for everything in their lives. Well, let's say Google does come out and say, whoa, hundreds of millions of people are now using Gemini every day, and they're integrating it with their Gmail, their Docs, their calendar, et cetera. And then they start making the pitch: you know, if you came over to Android, or better, a Pixel phone, you could integrate all of your Gemini stuff more easily. That would be a compelling reason for people to switch away from iOS, the first really compelling reason in years, if I'm being honest. If Gemini becomes integral to people's daily lives, that could start putting pressure on Apple in a big way.
To paraphrase The Onion: screw it, we're doing three folds. Samsung has unveiled the Galaxy Z TriFold, with a 6.5-inch outer screen, a 10-inch inner screen, a Snapdragon 8 Elite for Galaxy system-on-a-chip, and a 3.9-millimeter body at its thinnest point. Quoting The Verge: The TriFold's inner screen measures 10 inches on the diagonal, with a 2160 x 1584 resolution and a 120Hz adaptive refresh rate that goes all the way down to 1Hz. That's a lot of screen. You can run three apps vertically side by side on it, and even use Samsung's DeX desktop environment in a standalone mode without a separate display. On paper, the TriFold's outer screen looks a lot like the one on the Z Fold 7: it's a 6.5-inch 1080p display with a 21:9 aspect ratio.

Each of the TriFold's three panels has a slightly different thickness. The center panel is the thickest at 4.2 millimeters, and it houses a USB-C port on the bottom edge. The thinnest panel measures just 3.9 millimeters thick, including a physical SIM tray, and the other panel is 4 millimeters thick. Those two sides fold inward over the center panel, unlike Huawei's Mate XT, which folds in a Z shape and uses part of the inner screen when folded. Samsung says that the main display undergoes a 200,000-cycle multifolding test, equivalent to folding the device approximately 100 times a day for five years.

The TriFold measures 12.9 millimeters thick when it's folded, 4.7 millimeters thicker than a Samsung Galaxy S25 Ultra. It's also thicker than a Z Fold 7, which is 8.9 millimeters when you fold it, but it's not too far off the previous Z Fold 6, which is 12.1 millimeters when folded. Although it folds differently, the Z TriFold is pretty close in size and weight to Huawei's Mate XT and the most recent Mate XTs. The Z TriFold is just a little thicker when folded, 12.9 millimeters versus 12.8 millimeters, and weighs 309 grams compared to the 298-gram XT.
With all that's going on inside the TriFold, Samsung has still managed to squeeze in three rear cameras: a 200-megapixel wide angle, a 12-megapixel ultrawide, and a 10-megapixel 3x telephoto. Both the cover screen and the inner screen include a 10-megapixel selfie camera as well. Each of the phone's panels houses a battery, adding up to a 5,600-milliamp-hour capacity. The whole thing is powered by a Snapdragon 8 Elite for Galaxy chipset, like the S25 series, and includes 16 gigabytes of RAM. One thing I'm not seeing on the TriFold spec sheet? S Pen compatibility. Samsung spokesperson Elise Sembach confirmed to The Verge over email that the TriFold lacks support for the company's Bluetooth stylus. The Z Fold used to include stylus support, but that ended with the most recent Z Fold 7. It'll launch first in South Korea on December 12, with a US launch planned for the first quarter of 2026. There's no US price just yet, but it'll cost about $2,500, converted from Korean currency, for the 512-gigabyte storage model when it launches. So you should probably start saving your pennies and nickels for this one. End quote. Odd that they pre-announced this without doing an event.

The last thing you want as an IT professional is for an auditor to catch something before you do. That's why YeshID automates provisioning, deprovisioning, access reviews, and SaaS discovery through the lens of security and compliance, giving IT leaders full visibility, consistent policy enforcement, and an audit-ready posture. YeshID delivers advanced IAM automation without forcing teams into legacy identity providers: whether you use Google Workspace, Microsoft 365, or Okta, YeshID integrates directly. Plus, YeshID's flexible group and role model gives IT precise control over access: provision automatically when users join a group, or only on request. That reduces license waste and maintains least privilege. Learn more at yeshid.com/techbrew. That's yeshid.com/techbrew.
To say compliance is complicated is an understatement. Constantly worrying about SOC 2, ISO, HIPAA, CMMC, FedRAMP, and more can leave your head spinning. Even worse, a misstep can get pretty costly for your startup. That's why Delve is designed as an AI-native compliance platform. Delve uses AI agents to handle headaches like taking repetitive screenshots for you, monitoring your tech stack for security gaps in real time, autofilling security questionnaires in your browser or in CSV, and creating secure data rooms to send to prospects and auditors. Their team can personally work with you in Slack to get you 100% compliant and manage your whole audit for you. Over 1,000 of the fastest growing companies get compliance done in Delve, including Lovable, Bland, Micro1, Instantly, and 11x. For listeners, they're offering an exclusive $1,000 discount on any compliance framework. Check out delve.co/morningbrew and start using AI for compliance. That's delve.co/morningbrew.

Finally today, back to the obsession of the moment. The whole "is Google ahead in AI" thing is sort of like crack for me. I love big strategic horse-race tech stuff, as I guess you've noticed over the years. Who's up, who's down, who's screwing up? What is the play here if you want to kill the competition or catch up? You know who else loves that stuff? Ben Thompson. Well, Ben has weighed in on all of this. Quoting Stratechery: The heroes of the AI story over the last three years have been two companies, OpenAI and Nvidia. The first is a startup called, with the release of ChatGPT, to be the next great consumer tech company. The other was best known as a gaming chip company characterized by boom and bust cycles, driven by their visionary and endlessly optimistic founder, transformed into the most essential infrastructure provider for the AI revolution. Over the last two weeks, however, both have entered the proverbial hero's journey cave and are facing their greatest ordeal.
The Google empire is very much striking back. Gemini 3's recent success initially seemed like good news for Nvidia. That analysis, however, missed one important point: what if Google sold its TPUs as an alternative to Nvidia? That's exactly what the search giant is doing, first with a deal with Anthropic, then a rumored deal with Meta, and third with the second wave of neoclouds, many of which started as crypto miners and are leveraging their access to power to move into AI compute. Suddenly, it is Nvidia that is in the crosshairs, with fresh questions about their long-term growth, particularly at their sky-high margins, if there were in fact a legitimate competitor to their chips. This does, needless to say, raise the pressure on OpenAI's next pre-training run on Nvidia's Blackwell chips: the base model still matters, and OpenAI needs a better one, and Nvidia needs evidence one can be created on their chips.

What is interesting to consider is which company is more at risk from Google, and why. On one hand, Nvidia is making tons of money, and if Blackwell is good, Vera Rubin promises to be even better. Moreover, while Meta might be a natural Google partner, the other hyperscalers are not. OpenAI, meanwhile, is losing more money than ever and is spread thinner than ever, even as the startup agrees to buy ever more compute with revenue that doesn't yet exist. And yet, despite all that, and while still being quite bullish on Nvidia, I still like OpenAI's chances more. Indeed, if anything, my biggest concern is that I seem to like OpenAI's chances better than OpenAI itself does.

If you go back a year or two, you might make the case that Nvidia had three moats relative to TPUs: superior performance; significantly more flexibility, due to GPUs being more general purpose than TPUs; and CUDA and the associated developer ecosystem surrounding it. OpenAI, meanwhile, had the best model, extensive usage of their API, and the massive number of consumers using ChatGPT.
The question, then, is what happens if the first differentiator for each company goes away? That, in a nutshell, is the question that has been raised over the last two weeks. Does Nvidia preserve its advantages if TPUs are as good as GPUs? And is OpenAI viable in the long run if they don't have the unquestioned best model?

CUDA, meanwhile, has long been a critical source of Nvidia lock-in, both because of the low-level access it gives developers and also because there is a developer network effect: you're just more likely to be able to hire low-level engineers if your stack is on Nvidia. The challenge for Nvidia, however, is that the big-company effect could play out with CUDA in the opposite way to the flexibility argument. While big companies like the hyperscalers have a diversity of workloads to benefit from the flexibility of GPUs, they also have the wherewithal to build an alternative software stack. That they did not do so for a long time is a function of it simply not being worth the time and trouble. When capital expenditure plans reach the hundreds of billions of dollars, however, what is worth the time and trouble changes.

OpenAI, in contrast to Nvidia, sells into two much larger markets. The first is developers using their API, and according to OpenAI anyways, this market is much stickier and reticent to change. Which makes sense: developers using a particular model's API are seeking to make a good product, and while everyone talks about the importance of avoiding lock-in, most companies are going to see more gains from building on and expanding from what they already know. And for a lot of companies, that is OpenAI. Winning business one app at a time will be a lot harder for Google than simply making a spreadsheet presentation to the top of a company about upfront costs and total cost of ownership. Still, API costs will matter, and here Google almost certainly has a structural advantage. The biggest market of all, however, is consumer, Google's bread and butter.
What makes Google so dominant in search, impervious to both competition and regulation, is that billions of consumers choose to use Google every day, multiple times a day, in fact. Yes, Google helps this process along with its payments to its friends, but that's downstream from its control of demand, not the driver. At one point, it didn't seem possible to commoditize content more than Google or Facebook did, but that's exactly what LLMs do: the answers are a statistical synthesis of all of the knowledge the model makers can get their hands on, and are completely unique to every individual. At the same time, every individual user's usage should, at least in theory, make the model better over time.

It follows, then, that ChatGPT should obviously have an advertising model. This isn't just a function of needing to make money; advertising would make ChatGPT a better product. It would have more users using it more, providing more feedback. Capturing purchase signals, not from affiliate links but from personalized ads, would create a richer understanding of individual users, enabling better responses. And as an added bonus, and one that is very pertinent to this article, it would dramatically deepen OpenAI's moat.

It's not out of the question that Google can win the fight for consumer attention. The company has a clear lead in image and video generation. Google is also obviously capable of monetizing users, even if they haven't turned on ads in Gemini yet, although they have in AI Overviews. It's also worth pointing out, as Eric Seufert did in a recent Stratechery interview, that Google started monetizing search less than two years after its public launch. It is search revenue, far more than venture capital money, that has undergirded all of Google's innovation over the years, and it is what makes them a behemoth today.
In that light, OpenAI's refusal to launch and iterate on ads as a product for ChatGPT, now three years old, is a dereliction of business duty, particularly as the company signs deals for over a trillion dollars of compute. And on the flip side, it means that Google has the resources to take on ChatGPT's consumer lead with a World War I-style war of attrition. OpenAI's lead should be unassailable, but the company's insistence on monetizing solely via subscriptions, with a degraded user experience for most users and price elasticity challenges in terms of revenue maximization, is very much opening the door to a company that actually cares about making money.

To put it another way, the long-term threat to Nvidia from TPUs is margin dilution. The challenge of physical products is that you do have to actually charge the people who buy them, which invites potentially unfavorable comparisons to cheaper alternatives, particularly as buyers get bigger and more price sensitive. The reason to be more optimistic about OpenAI is that an advertising model flips this on its head. Because users don't pay, there is no ceiling to how much you can make from them, which by extension means that the bigger you get, the better your margins have the potential to be, and thus the bigger the total size of your investments. Again, however, the problem is that the advertising model doesn't exist for OpenAI. Yet.

I understand why the market is freaking out about Google. Their structural advantages in everything from monetization to data to infrastructure to R&D are so substantial that you understand why OpenAI's founding was motivated by the fear of Google winning AI. It's very easy to imagine an outcome where Google's inputs simply matter more than anything else. Google already has done this once: search was the ultimate example of a company winning an open market with nothing more than a better product. Aggregators win new markets by being better.
The open question now is whether one that has already reached scale can be dethroned by the overwhelming application of resources. End quote.

So, I am obsessed with Pluribus, the TV show that's on Apple TV. Like, obsessed. Highly.
