Transcript
A (0:00)
Today on the AI Daily Brief, AGI timelines are moving forward, with implications for global AI policy. Before that in the headlines, Google's AI lead says that there are no plans for ads in Gemini. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Section, Zencoder, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. And if you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai. You can also visit aidailybrief.ai to find out anything else you might need about the show. You can get access to our new Superintelligent Compass beta, learn more about our forthcoming AIDB Intel product, or even join our free AI builder community. With all that out of the way, though, let's look over to all of the conversations coming out of Davos.

Welcome back to the AI Daily Brief Headlines edition, all the daily AI news you need in around five minutes. Today's main episode is all about comments from Davos, and actually that's where our headlines begin as well. One of the big conversations for the past week or so has been OpenAI's plans to introduce ads into ChatGPT. Now, I did an extensive show about this earlier in the week, but one of the major points of conversation, especially in places like Twitter/X, was how ads impacted the competitive dynamics. Specifically, would it be an advantage for Google, either (a) because, given their deep capitalization and balance sheet, they wouldn't have to do ads in Gemini, or (b) because they have more experience with ads? Well, speaking with Alex Heath of Sources, DeepMind CEO Demis Hassabis says that at the moment Google doesn't have any plans to bring advertising to Gemini. Commenting on ChatGPT ads, he said, it's interesting they've gone for that so early.
Maybe they feel they need to make more revenue now. The comments do buck a string of recent reporting around Google's plans. In December, for example, Adweek reported that Google had told advertising clients that ad placements in Gemini were targeted for a 2026 rollout. That reporting was sourced from at least two advertising clients who requested anonymity to discuss the meetings. They said that Google had not shared prototypes or specifications for how ads would appear in Gemini, suggesting the discussions were still at a very early stage. And yet the reporting was clear that this was about ads directly in the chatbot, rather than ads appearing through the use of AI Mode in Search. Speaking with Business Insider last week, Dan Taylor, who is Google's VP of global ads, said there were no plans for ads in the Gemini app and elaborated on the distinction between Google's businesses. Search and Gemini, he said, are complementary tools with different roles. While they both use AI, Search is where you go for information on the web, and Gemini is your AI assistant. Search is helping you discover new information, which can include commercial interests like new products or services; we see Gemini as helping you create, analyze, and complete tasks. However, he did note that AI Mode in Search and Gemini are slowly converging with the introduction of AI shopping features. Google is already offering ads in AI search, including a new feature called Direct Offers that presents a personalized discount in AI Mode. I think it's an interesting choice to fully deny that they've got these plans. While on the one hand I do believe that Google may see an opportunity to win some margin off of ChatGPT by holding out longer on ads, I don't think there's any chance in the world that Gemini's free version stays ad-free forever either. But who knows; just holding out for a year, depending on consumer response to these ads, could be enough to make a difference.
Next up, Meta is rumored to be scaling back their in-house chip program. Last we heard about the program, in August, design had been completed in collaboration with Broadcom and Meta was ramping up orders. In November, The Information reported that Meta was in talks with Google to order billions of dollars' worth of their TPUs. That potentially signaled a pivot away from their custom silicon, but the reports were very thin. Now analyst Jeff Pu of Haitong Securities reports in a research note that Meta is deprioritizing their deployment of custom silicon. Pu notes that this lines up with a broader shift where the hyperscalers are more focused on immediate compute needs than self-sufficiency. Still, Meta is reportedly looking for ways to avoid paying the Nvidia tax. The latest report suggests that instead of looking to become one of Google's first large TPU customers, they are instead placing large orders for AMD's latest chips. Pu claims that this isn't a full replacement of Meta's fleet, but rather a strategic purchase to meet short-term requirements more efficiently. He reported that Meta could still deploy their custom silicon at a later date, with a focus on specialized workloads. I think that the more interesting conversation is what this implies around a shift overall. Alongside Meta, OpenAI and Anthropic launched custom silicon programs last year with an aim to reduce reliance on Nvidia and AMD, but it seems increasingly unlikely that these custom silicon initiatives will make sense in the context of rapidly accelerating compute needs. Some are even questioning whether there's any financial benefit to developing an in-house chip, with investor Nicolaes Godoness posting that AMD's total cost of ownership and performance per watt in their latest chips beats out anything Meta can do internally, and TPUs apparently too. Last year was all about how Nvidia and AMD could see erosion of market share.
Now it seems the hyperscalers won't have the luxury of seeking alternatives and could fall back on established players to keep up with demand. In partnership news, OpenAI has signed a three-year deal to integrate their AI models into ServiceNow's platform. The Wall Street Journal reported that ServiceNow users would be able to choose OpenAI's models within the platform, and the deal would involve a revenue commitment from ServiceNow. OpenAI COO Brad Lightcap told the Journal, enterprises want OpenAI intelligence applied directly into ServiceNow workflows. Looking ahead, customers are especially interested in agentic and multimodal experiences so they can work with AI like a true teammate inside ServiceNow. ServiceNow President Amit Zavery said the integration will go way beyond backend optimizations. He said that OpenAI's computer use agents will be granted access to IT tasks, like restarting a computer remotely, essentially allowing them to function as automated IT support. Zavery said the agents could also help companies access data stuck in legacy systems like mainframe computers: the computer use models are basically now doing this through learning and feeding it back into the ServiceNow workflow platform. I think we're going to learn a lot this year about exactly how the agentic business model is going to shake out. It is a very different approach to try to integrate your technology inside other delivery platforms like ServiceNow versus just trying to be the ServiceNow. I don't think it's clear exactly how that plays out, but I think there are going to be a lot of experiments this year. It also, however, continues to be a land grab for enterprise business, and I expect that to do nothing but ramp up throughout the year. Lastly today, one more OpenAI report. We have of course been tracking closely when OpenAI's first hardware will come out, and apparently it's set to be unveiled later this year.
In an onstage interview with Axios at Davos, OpenAI Chief Global Affairs Officer Chris Lehane flagged that devices were a big theme for the company moving forward. He said that OpenAI was, in his words, on track to unveil their device in the latter part of 2026. Now, he was careful to caveat almost everything about the device rollout. He refused to discuss form factor, and he wouldn't commit to this being a product release timeline rather than just an unveiling. He added that this year was, quote, most likely, but we'll see how things advance. When the interviewer tried to present this as breaking news that we'd get the device this year, Lehane tried to correct him, adding, I didn't say it's coming this year; I said we're on track. Now, it's unclear if Lehane's comments refer to the original puck design, the recently rumored behind-the-ear capsule-shaped device, or a third different thing. In reporting the news, Gizmodo said, no, there have not been any updates about what the hell it is. However, that was far from the only thing that we got at the World Economic Forum. And so with that, we'll close the headlines and move on to the main episode.

Hello friends. If you've been enjoying what we've been discussing on the show, you'll want to check out another podcast that I've had the privilege to host, which is called You Can with AI, from KPMG. Season one was designed to be a set of real stories from real leaders making AI work in their organizations. And now season two is coming, and we're back with even bigger conversations. This show is entirely focused on what it's like to actually drive AI change inside your enterprise, and has case studies, expert panels, and a lot more practical goodness that I hope will be extremely valuable for you as the listener. Search You Can with AI on Apple, Spotify, or YouTube and subscribe today.

Here's a harsh truth: your company is probably spending thousands or millions of dollars on AI tools that are being massively underutilized.
Half of companies have AI tools, but only 12% use them for business value. Most employees are still using AI to summarize meeting notes. If you're the one responsible for AI adoption at your company, you need Section. Section is a platform that helps you manage AI transformation across your entire organization. It coaches employees on real use cases, tracks who's using AI for business impact, and shows you exactly where AI is and isn't creating value. The result? You go from rolling out tools to driving measurable AI value. Your employees move from meeting summaries to solving actual business problems, and you can prove the ROI. Stop guessing if your AI investment is working. Check out Section at sectionai.com. That's s-e-c-t-i-o-n-a-i dot com.

If you're using AI to code, ask yourself: are you building software, or are you just playing prompt roulette? We know that unstructured prompting works at first, but eventually it leads to AI slop and technical debt. Enter Zenflow. Zenflow takes you from vibe coding to AI-first engineering. It's the first AI orchestration layer that brings discipline to the chaos. It transforms freeform prompting into spec-driven workflows and multi-agent verification, where agents actually cross-check each other to prevent drift. You can even command a fleet of parallel agents to implement features and fix bugs simultaneously. We've seen teams accelerate delivery 2x to 10x. Stop gambling with prompts. Start orchestrating your AI. Turn raw speed into reliable, production-grade output with Zenflow, free.

Today's episode is brought to you by my company, Superintelligent. In 2026, one of the key themes in enterprise AI, if not the key theme, is going to be how good the infrastructure is into which you are putting AI and agents.
Superintelligent's agent readiness audits are specifically designed to help you figure out (1) where and how AI and agents can maximize business impact for you, and (2) what you need to do to set up your organization to be best able to leverage those new gains. If you want to truly take advantage of how AI and agents can not only enhance productivity but actually fundamentally change outcomes in measurable ways in your business this year, go to besuper.ai.

Welcome back to the AI Daily Brief. Right now the annual World Economic Forum is going on in Davos, and as much as people love to hate on the event, it is a good chance every year to take the pulse of where the conversation is among global leaders. And while this year, of course, much of the conversation is focused around Greenland, there is another profound shift that is also getting a significant amount of airtime, which is of course AI. But not just AI in general; specifically, the way that timelines are accelerating. Both Anthropic's Dario Amodei and Google DeepMind's Demis Hassabis had numerous interviews yesterday. In fact, Dario almost feels like he's on a little press tour, and let's just say many of the headlines were pretty significantly attention-grabbing for both of these folks. AGI timelines are shifting forward. Demis now has it on a five-year timeline, and I think he overall gives the impression that his sense is that the last mile to AGI is perhaps more difficult than we give it credit for; in other words, not just a matter of more compute and recursively self-improving code. Dario, on the other hand, thinks that things are coming much more quickly. He's putting AGI on much closer to a two-year timeline, and honestly, one gets the impression when watching these interviews that he actually thinks it's even closer than that, and that the two-year timeline almost feels like him hedging to not sound insane.
This, I think, is important context for some of the comments that got the most attention, which came when Amodei said that he believed that selling chips to China was akin to selling nukes to North Korea. Now, these comments came during a joint interview with Demis Hassabis, during which, of course, the Trump administration's recent approval of Nvidia selling advanced chips to China was a major topic of conversation. Amodei argued that the administration was making, in his words, a major mistake that could have incredible national security implications. He said, we are many years ahead of China in our ability to make chips, so I think it would be a big mistake to ship these chips. I think this is crazy. It's a bit like selling nuclear weapons to North Korea, Amodei continued. The CEOs of the Chinese companies say it's the embargo on chips that's holding us back; they explicitly say this. And at this point it's basically the only area where we are meaningfully ahead. While DeepMind CEO Hassabis doesn't share Amodei's dire concerns about China, he does think people need to update their mental framework about China's capabilities. He reiterated his notion that China is about six months behind the West. But he also reiterated that he doesn't think the Chinese labs have so far shown they're able to innovate past what the Western labs can do. He said, they're very good at catching up to where the frontier is, and increasingly capable of that, but I think they've yet to show they can innovate beyond the frontier. Now, interestingly, all of this brought up the question of how society should respond. And in fact, a couple of times they were asked if they would pause and slow down. Some folks have advocated for a pause to give regulation time to catch up, to give society time to sort of adjust to some of these changes. In a perfect world, if you knew that every other company would pause, if every country would pause, would you advocate for that?
