Transcript
State Farm Announcer (0:00)
This episode is brought to you by State Farm. Listening to this podcast? Smart move. Being financially savvy? Smart move. Another smart move? Having State Farm help you create a competitive price when you choose to bundle home and auto. Bundling: just another way to save with a Personal Price Plan. Like a good neighbor, State Farm is there. Prices are based on rating plans that vary by state. Coverage options are selected by the customer. Availability, amount of discounts and savings, and eligibility vary by state.
Brian McCullough (0:34)
Welcome to the Tech Brew Ride Home for Friday, November 21st, 2025. I'm Brian McCullough. Today: Google phone users can now work with AirDrop on the iPhone, because Google cleverly found a way in. Google might have jumped ahead in the AI race, and Sam Altman knows it. I've heard of quantum computing, but a quantum internet? The weekend long read suggestions, and at the end, a long rant about my latest AI experiments. Here's what you missed today in the world of tech.

Conducting business online can feel a little scary these days, especially with AI creating new opportunities for fraud. In fact, Syllent estimates that AI was behind roughly 20% of the fraud perpetrated in 2024. Spotting bad agentic AI while allowing good agents to continue with their tasks isn't easy. Thankfully, Mimoto Continuous Captcha can spot malicious agents pretending to be people at the point of account creation or registration. Unlike past captcha solutions, it runs behind the scenes, with no puzzles for users. Mimoto is offering Tech Brew Ride Home listeners early access with a special price for Mimoto Continuous Captcha. Right now, our listeners can purchase a year of Mimoto Continuous Captcha for $5,000, a 20% discount on their lowest-price plan. To learn more, head to Mimoto AI Ride Home. That's Mimoto AI Ride Home.

Google has suddenly updated Quick Share to work with Apple's AirDrop to make file transfers between iPhones and Android devices easier, starting with the Pixel 10 family. Quoting the Verge: Interestingly, the company engineered this interoperability without Apple's involvement. Google says it works with iPhone, iPad, and macOS devices, and it applies to the entire Pixel 10 series, while being limited to Google's latest phones for now. Google spokesperson Alex Moriconi says we're bringing this new experience to Pixel 10 first before expanding to other devices.
In order to send a file from a Pixel 10 phone over AirDrop, the owner of the Apple device will need to change their settings to make their device discoverable to anyone. There's an option to do this with an automatic limit of 10 minutes. Then the Pixel 10 owner should be able to see the device using Quick Share and send it. On the other side, it seems that it'll look just like any other AirDrop request that the user can approve to start the transfer. According to support documentation, it goes the other way too. Likewise, the Pixel 10 device will need to be discoverable to anyone or in receive mode. Then, when the Apple device owner starts an AirDrop transfer, the Pixel owner accepts, and voila: cross-platform sharing. A post on Google's security blog goes into greater detail about how it's implemented, claiming this feature does not use a workaround. The connection is direct and peer-to-peer, meaning your data is never routed through a server, shared content is never logged, and no extra data is shared. When we asked Google whether it developed this feature with or without Apple's involvement, Moriconi confirmed that it was not a collab. We accomplished this through our own implementation, he tells the Verge. Our implementation was thoroughly vetted by our own privacy and security teams, and we also engaged a third-party security firm to pen test the solution. Google didn't exactly answer our question when we asked how the company anticipated Apple responding to the development. Moriconi only says that we always welcome collaboration opportunities to address interoperability issues between iOS and Android. The security blog post also details Google's reasoning for why this implementation is secure, along with mentioning an independent security assessment from NetSPI, preemptively pushing back on reasons Apple might cite to block compatibility. Apple hasn't yet responded to our request for comment on this development. Notably, this isn't an Android feature yet.
It's currently limited to Google's own phones, and the latest generation at that. Still, it's kind of huge news for Android users. Seamless sharing between Apple devices with AirDrop is one of those extremely helpful features that's been kept inside the walled garden until now. With RCS now widely in use on iPhones, making cross-platform messaging easier, it seems like another meaningful step toward lowering those garden walls. End quote.

There's been a lot of chatter in recent days about Google potentially suddenly taking the lead in the AI horse race, and according to The Information, Sam Altman told OpenAI employees last month that Google's recent progress in AI could create some temporary economic headwinds for our company, after OpenAI researchers heard that Google had created a new AI that appears to have leapfrogged OpenAI's in the way it was developed. Altman said in the memo that we know we have some work to do, but we are catching up fast. Still, he cautioned employees that I expect the vibes out there to be rough for a bit. The memo foreshadowed Google's launch this week of Gemini 3, an AI model that software developers say excels in automating tasks related to website and product design, as well as in coding, a capability that is one of the most important drivers of revenue at AI firms like OpenAI. Altman's comments show that OpenAI's technological lead over rivals like Google and Anthropic has narrowed. Investors have sunk more than $60 billion into OpenAI recently, valuing it at $500 billion on the belief it will continue to dominate the market for developing AI that creates content and reasons the way humans do. That domination is teetering. Anthropic, a four-year-old firm whose founders previously worked at OpenAI, appears poised to generate more revenue than OpenAI this year from selling AI to software developers and businesses through an application programming interface, The Information reported earlier this month.
Anthropic's models specialize in generating computer code based on what customers want to develop, from new apps to updates to existing code. Google, meanwhile, continues to use its search app and other products to promote its Gemini chatbot, which competes with OpenAI's ChatGPT. To be sure, ChatGPT is significantly ahead of the Gemini chatbot in terms of usage and revenue, but the gap has been shrinking. ChatGPT is AI to most people, and I expect that to continue, Altman said in the memo. Google's other advantage is economic. OpenAI is one of the fastest-growing businesses in history, going from next to no revenue in 2022 to a projected $13 billion this year. But it also has projected it would burn more than $100 billion in pursuit of human-level AI in the coming years, while spending hundreds of billions of dollars on servers to do it, meaning it will likely need to raise the same amount in additional capital. Meanwhile, Google, valued at $3.5 trillion, generated more than $70 billion in free cash flow over the past four quarters alone. While ChatGPT looks poised to take a bite out of Google search, Google's financial performance has improved in part because it also has a booming cloud business that rents out servers to large customers, including OpenAI and Anthropic. The financial disparity between OpenAI and established firms like Google has prompted public market investors to question whether the startup's unprecedented revenue growth, including projected growth, will be enough to erase concerns about its future cash burn. Google's turnaround is a vindication for CEO Sundar Pichai and his decision to merge dueling AI labs within Google, as well as to effectively pay $3 billion last year to bring back longtime AI researcher Noam Shazeer, who had left to launch a chatbot startup.
Altman, in his note, acknowledged that by all accounts, Google has been doing excellent work recently, especially on pre-training, the first phase of developing a large language model that can generate text or images. In that phase, AI researchers expose an LLM to data from the Web and other sources so it can learn connections between them. Google's success with pre-training in particular came as a surprise to many AI researchers, given that OpenAI at times has struggled to eke out gains from pre-training, an issue Google also wrestled with for a while. Those challenges previously prompted OpenAI to focus more on a newer type of AI model known as reasoning, which uses more processing power to produce better answers. And before OpenAI launched its GPT-5 model this summer, its employees found the tweaks they made to the model during pre-training worked when the model was smaller in size but stopped working as it grew, The Information previously reported. That suggests OpenAI will have to resolve these pre-training issues to catch up to Google in that field. Altman last month assured staff that OpenAI would gain ground in the coming months, including with a new LLM codenamed Shallotpeat. In developing that model, OpenAI aims to fix bugs it has encountered in the pre-training process, according to a person with knowledge of the model. Altman said he wanted to focus on very ambitious bets technologically, even if it meant that OpenAI would get, quote, temporarily behind in the current regime. Those bets include advancements in using AI to generate data that could train new AI, and post-training techniques such as reinforcement learning, which is essentially a way to rate a model's answers positively or negatively so it can learn to improve them. Altman has privately and publicly discussed the company's bet on automating AI research itself as a way to speed up breakthroughs, including in AI's ability to surpass humans in everything from energy and biotech research to healthcare.
End quote. According to TechCrunch, Kalshi has raised $1 billion, led by Sequoia and CapitalG, at an $11 billion valuation, less than two months after it announced a $300 million fundraise at a $5 billion valuation. Again, I keep hearing that the internal usage numbers at these prediction markets are just exploding. This would give credence to that. Kalshi's main rival, Polymarket, was reportedly in talks last month to raise another funding round at a $12 billion to $15 billion valuation, mere weeks after closing a $1 billion round at an $8 billion pre-money valuation, Bloomberg reported. Kalshi and Polymarket surged in popularity last year, when their markets allowed people to bet on the outcome of the presidential election. These betting sites became even more prominent after they correctly predicted the results of the New York City mayoral election earlier this month. For the Mamdani versus Cuomo race, Kalshi purchased ad space on New York subway cars, running live screens that displayed the up-to-the-minute odds of each candidate winning, a marketing campaign that undoubtedly raised the company's brand awareness among New Yorkers. In mid-October, the company reached $50 billion in annualized trading volume, marking a more than thousandfold increase from the approximately $300 million volume posted last year, the New York Times reported. Kalshi was co-founded by two former hedge fund traders, Tarek Mansour and Luana Lopes Lara. The duo met as undergraduate students at MIT while studying computer science and mathematics. Prediction markets have historically been controversial and subject to legal challenges because they operate in the gray area between financial instruments and traditional gambling.
While Kalshi has secured the right for Americans to use its platform after successfully suing the Commodity Futures Trading Commission last year, the company is currently engaged in legal disputes with numerous state regulators who claim its activities are illegal gambling.

I'm always interested in numbers like this, just to remind us of kind of where we're at. Quoting Pew Research: YouTube and Facebook remain the most widely used online platforms. The vast majority of U.S. adults, 84%, say they use YouTube. Most Americans, 71%, also report using Facebook. These findings are according to a Pew Research Center survey of 5,022 U.S. adults conducted February 5 through June 18, 2025. Half of adults say they use Instagram, making it the only other platform in our survey used by at least 50% of Americans. Smaller shares use other sites and apps we asked about, such as TikTok and WhatsApp. Somewhat fewer say the same of Reddit, Snapchat, and X, formerly Twitter. This year, we also asked about three platforms that are used by only about 1 in 10 or fewer U.S. adults: Threads, Bluesky, and Truth Social. 37% of U.S. adults report using TikTok, which is slightly up from last year and up from 21% in 2021. Instagram: half of U.S. adults now report using it, which is on par with last year but up from 40% in 2021. WhatsApp and Reddit: about a third say they use WhatsApp, up from 23% in 2021, and 26% today report using Reddit, compared with 18% four years ago. End quote. The strong growth in WhatsApp usage among Americans lines up with what I've been seeing personally, anecdotally, lately, but I'm surprised TikTok is that low. Honestly, TikTok seems to drive everything these days, but maybe not among the olds yet, I guess.

We've all been there: too many SaaS tools, not enough visibility at all, and way too much access for you to keep track of. It's the stuff security nightmares are made of. That's where Trelica by 1Password comes in.
They inventory every app your company uses and create app profiles to help you easily assess risks, manage access, and make sure your password security is locked down tight. With 1Password's Extended Access Management, you can control your company's many, many SaaS tools, securely onboard and offboard your people, and actually hit your compliance goals. I've been telling you about 1Password Extended Access Management all year, and now Trelica comes along to make things even better. Sleep easy with Trelica by 1Password. Learn more at 1Password.com/ride. That's 1Password.com/ride.

Keeping pace with data growth in the age of AI is like trying to find enough shelf space after a trip to a big box store. AI and data growth have outpaced the old storage model. Manual management of traditional storage can't keep up, so it's time for a new, unified approach from Pure Storage. They help organizations simplify and automate how data is stored and managed, eliminating silos and putting intelligence at the center of operations. When you don't know where data lives or how it's used, governance slips, and visibility and compliance can become constant challenges. The Pure Storage platform unifies storage into a single, intelligent layer that can turn data into a governed, virtualized cloud of data with guaranteed outcomes. Learn more at PureStorage.com/morningbrew. That's PureStorage.com/morningbrew.

I've been anticipating quantum computing, but it never occurred to me we could also get a quantum internet. Quoting Reuters: IBM and Cisco on Thursday said they plan to link quantum computers over long distances, with the goal of demonstrating the concept is workable by the end of 2030. The move could pave the way for a quantum internet, though executives at the two companies cautioned that the networks would require technologies that do not currently exist and will have to be developed with the help of universities and federal laboratories.
Quantum computers hold the promise of solving problems in physics, chemistry, and computer security that would take existing computers thousands of years. But they can be error-prone, and making a reliable one is a challenge that IBM, Alphabet's Google, and others are pursuing. IBM is seeking to have an operational machine by 2029. The challenge begins with a problem: quantum computers like IBM's sit in massive cryogenic tanks that get so cold that atoms barely move. To get information out of them, IBM has to figure out how to transform information in stationary qubits, the fundamental unit of information in a quantum computer, into what Jay Gambetta, director of IBM Research and an IBM fellow, told Reuters are flying qubits that travel as microwaves. But those flying microwave qubits will have to be turned into optical signals that can travel between Cisco switches on fiber optic cables. The technology for that transformation, called a microwave-optical transducer, will have to be developed with the help of groups like the Superconducting Quantum Materials and Systems Center, led by the Fermi National Accelerator Laboratory near Chicago, among others. Along the way, Cisco and IBM will also publish open source software to weave all the parts together. We are looking at this end to end as a system rather than two discrete roadmaps, said Vijoy Pandey. And quoting Silicon Republic: The aim is to ultimately create a network of large-scale, fault-tolerant quantum computers, enabling them to work together to run computations over tens to hundreds of thousands of qubits. They say such a network would potentially allow problems to be run with trillions of quantum gates, the fundamental entangling operations required for transformative quantum applications such as massive optimization problems or the design of complex materials and medicines.
It is one thing to look at linking two quantum computers that are physically close, but IBM and Cisco will explore how to transmit qubits over longer distances, such as between buildings or data centers, via, for example, optical photons and microwave-optical transducer technologies. The sheer breadth of the challenge means that the two tech players will collaborate extensively with academia and federal labs in the US, as much of the technology required has yet to be created. They say that a future internet of quantum computers would enable a whole range of new possibilities, from ultra-secure communications to precise monitoring of climate, weather, and seismic activity, potentially by the late 2030s. End quote.

For the weekend long reads, I have two pieces from the Verge about what it's like to use the new Nano Banana Pro, something that I intend to delve deeply into this weekend. And then also from the Verge, a look at the literal new Silicon Valley that is growing in the Arizona desert as chip manufacturing moves there. And then finally, from the Wall Street Journal, how the hottest new chicken chain, Raising Cane's, keeps its secret sauce safe. It involves a lot of spycraft and subterfuge. Okay, no bonus episodes for you this weekend. But if you want to count it this way, I've got like five or six bonus episodes for you if you head over to the Rad History podcast feed. That's right, I punched out five Rad History episodes over the last 10 days. Depending on how things go, this weekend I'll have another one for you: a part one episode looking at the life and career of Phil Hartman.

This is the second part of me telling you about my recent AI experiments. I told you yesterday what wasn't quite working for me yet, and here's what I found actually is working. Let me turn down the volume on the music here, because this is gonna go for a while. Remember a year ago when I did the 80s 90s podcast?
I did about 10 episodes, had a ton of fun, but couldn't do more because I didn't have the time. I loved the idea, and I had drawn up like 250 topics that I wanted to run through if I could, but I just couldn't. Well, with AI the way it is now, I can. I've made a few breakthroughs with my AI workflow. Let me tell you about them. Breakthrough number one: I trained AI on my writing. I uploaded a PDF of my book and a bunch of other examples of my writing over the years, and I set up a project inside ChatGPT and I said, anytime you output text in this particular project, output it in Brian's voice, my writing style. That fixed the problem that I used to call middle school book report syndrome: how AI sounded like a middle school book report. The output I'm getting now for podcast scripts is much, much closer to what I want it to sound like, to how I would write it. The second thing I had a breakthrough on is I realized I have a shortcut that's better than AI, and that's the fact that I've written and recorded and, crucially, edited, at this point, approaching 3,000 podcast episodes in my life. So even though I did try to train AI on my voice, we're not quite there yet. And also, when I trained it on episodes of this show, I mean, look, I talk way too fast on this show. I talk fast on purpose. I want you to get in and out with the info that you need and move on with your day. But that didn't quite work for a more laid-back history show. So even though I might come back to the voice training thing eventually, I thought, screw it. I know how to record and edit audio quickly enough that it's actually better if I don't use AI. It's actually faster. And I know the pronunciations will be right, the emphasis on words and the emotion will be right.
I mean, once I've got a 30-minute script in the can, I can record that in basically 40 minutes, and then I can edit it in another 10 to 15 minutes, so I can have it all done within an hour, which actually is sometimes faster than the AI can even generate it. Trust me, folks, I have a superpower. I wasn't born with it, but after almost 3,000 episodes, I have it: I edit quickly. So now, okay, for these new episodes, I'm using my voice, not AI. If you listen to any of those Rad History episodes, it's not AI, at least not yet. But then the next and biggest breakthrough was just learning how to prompt. Dude, I spent weeks reading up on best prompting practices. And one of the biggest things, TL;DR, is don't just ask for things you want it to do; also put in things you don't want it to do. So basically I have a whole file of templates that I can use to prompt depending on my needs. Also, again, setting up a project in ChatGPT and giving it parameters that it always has to adhere to at a higher level than your lower-level prompt is also key. So now I have my prompts in a place where I like the output. Okay, so now I just went back to that list of 250 topics of stuff I wanted to do episodes for Rad History on, to see what happened. And what happened is I'm really happy with the scripts it's been producing. Let me give you three examples of why this whole process has delighted me. And this will tie into my previously expressed thesis about AI curation versus AI slop in a second. But I'll come back to that at the end. So, number one example: the Bo Jackson episodes that are live. A year or so ago, I read a biography of Bo Jackson. I loved it, and that inspired me to want to do a Rad History episode or two on him. So cool. Put in the prompts, got decent results, but it was leaving out great anecdotes I remembered from the book.
So not an erase-and-redo thing, but just cracking open the book to find the anecdotes about, say, the home run he hit that may have been like that scene in the movie The Natural, and then putting that in. Because, you know, you don't only have to use the AI; in fact, you shouldn't. I can augment what the AI does to get the things in there that I know I want in there, even when it's not in there from the AI, because the AI can't read my mind. Example number two: depending on how much time I have this weekend, the next episode will be a part one episode on the life and career of Phil Hartman. The output was great for the first script. I read it over yesterday, and it got most of what I wanted, but it did hallucinate. No, Phil Hartman didn't do the George H.W. Bush impression on SNL. That was Dana Carvey. I know that. You got that wrong, AI. So edit that out, rewrite that, et cetera. I know that because I know that, right? I was there. I would never try to do an episode about particle physics or something like that, because I know nothing about nothing when it comes to particle physics. I want to use this to do things I mostly know about. And I wouldn't just press a button and write a script without reading it over and editing it and putting in my own spin. Example number three: the most recent episode that's up there right now is about the Memphis Group. I learned about them in college when I was briefly an art major. I've read articles about them over the years. I've been to retrospectives of their work in art museums. I don't know everything about them, but let's say I knew 60% about them, for the purposes of this project at least. So it was actually fun for me to do that episode in the sense of, as I'm reading the script the AI was producing, I was like, oh, who's that artist? Never heard of her. Let me go down a rabbit hole on Wikipedia. So I was learning at the same time that I was producing.
That's kind of where the action is, the juice, for me: creating and learning at the same time. That's why this is fun. And that brings me to my AI rant. Yes, I too am worried about AI slop. Yes, I am sure there are many creators out there making podcasts and YouTubes and stuff where they just press a button, never look back at the output, and just publish. But in example one, the Bo Jackson example, I still have a creative vision of what I want, and I'm trying to bend the AI toward that vision. If I can make it work, I do. If I can't, I just rewrite. In example two, I knew enough about the topic to know when it's getting things wrong. And as another example of that, I prompted the AI to tell the story of Toys R Us and big box stores in the 80s and 90s. You trust me enough to know I know enough about that sort of thing to get that right. In example three, and this is the part that is creatively rewarding, part of the creative process of doing history or journalism or whatever is learning something. Yes, the end result is you're hopefully educating your end audience. But every journalist and historian worth their salt knows the thrill of learning something they didn't know while they're doing their work, and being chuffed to eventually share that with others. So where are we? Where we are is, when I have a spare few hours, I can either play Europa Universalis (no, away Satan, not today) or I can run through one of these 80s 90s history topics that I wanted to do, and now I can. And it's good. Or at least, I leave that to you, but I think it's good. So, boom, thanks to AI, I'm doing the thing I originally wanted to do with the 80s 90s project a year ago. I'm being creatively and intellectually fulfilled. And I would never have gotten to do this without the AI. I wrote my book about the history of the Internet because I wanted to tell the kids coming up behind me what it was like during the dot com era.
I spent literally five years reading every book, every article online, doing the Internet History Podcast, going to the literal New York Public Library and scanning old issues of the Industry Standard and putting them into my OneNote file, and then rereading all that. Is what I've described above better or worse than doing all that work? Better or worse for society? I mean, because it's great for me. I'm being creatively and emotionally fulfilled in the same way that I was when I wrote my book. I'm just skipping a really big step, the big, time-consuming, monotonous step. And so we're back to my idea of AI slop versus, I don't know, again, I want to call it AI curation. This is just a new tool, like any other new tool. You can choose to use it or not. You could spend a life learning how to paint as realistically as possible via various best practices and academies, et cetera. But then photography came along. Photography didn't kill art. It just changed how you curated the aesthetic of a representation of reality in an image. So is it only a question of AI slop on one side and human creation on the other? Is it really that binary? Can't there be something in the middle? Something like AI curation? And all I can tell you is that for me, AI is allowing me to scratch my creative itch, to be creatively fulfilled in a way that I haven't been for years. AI is a new tool that has enabled that. And I'm not saying that I hate this job, this show. I love this job. But some days, you know, I have to talk about something with you whether I want to or not, 'cause the news is the news. But in the afternoons, I can talk to you about only what I want to talk to you about in that moment. Something like, I don't know, checking the master episode list, the weird career of Pauly Shore, say. Anyway, Rad History. Check your podcast app and subscribe if you're interested. I'm not saying I'm going to do an episode every day while I'm on this kick. Probably several times a week, though.
You can hear me talk about tech in the first half of the day and then talk about 80s 90s stuff in the second half. It's sort of like when I was doing this show in the morning and the daily COVID-19 show in the afternoons. The only differences are that when I was doing that, I had two other people helping me; now I have AI. And also, this is much more fun to do than a podcast about a pandemic. So talk to you on Monday. Unless you want to hear about the life of Phil Hartman first.
