Transcript
A (0:00)
Today on the AI Daily Brief, why AI power users are actually working more. Before that in the headlines, is this the best AI video model yet? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, quick announcements before we dive in. First of all, thank you to today's sponsors, Rackspace Technologies, AssemblyAI, Blitzy, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsorship, speaking, or any of the various initiatives going on in and around the AIDB world, go to aidailybrief.ai.

One of the ongoing conversations when it comes to the China-US AI race is not just how far behind China is, but when and if it will cross the barrier of actually being able to innovate ahead of the US rather than just catching up quickly. For some, the release of a new video model suggests that that threshold has been crossed. TikTok parent company ByteDance has surprised with a new video model that absolutely seems to push the state of the art. The model, called Seedance 2.0, was released without fanfare on Monday. The early demos are fairly incredible. Menlo Ventures' Deedy Das presented a series of examples in a thread capturing numerous styles, and there really is range here. There's a Pixar scene, a product launch video with not just coherent text and animations but actually impressive graphics, a Goku cartoon scene, and many more. Deedy wrote: China's ByteDance just dropped the most advanced video generation model in the world. Seedance 2.0 has native audio gen, a drastic step up from Veo 3.1 and Sora 2 in quality, supports multimodal input and 2K resolution, goes beyond cinematic video and can do product demos as well. And it's really hard to tell it's AI. In addition, it appears the model is capable of generating 15-second clips with multiple cuts.
Ray Diao, a former Google senior engineer, commented: what actually sets this apart is native audio-visual co-generation. Competitors handle audio in post-production, but ByteDance generates it alongside the video. And this is one of, if not the, first time that Chinese models have added sound. Watching some of the videos, the perfect lip sync and immersive sound are part of what make the new model stand out. After taking the model for a test drive, 36Kr wrote: the original sound experience is truly different from added voiceover. It shows that AI is not just creating pictures; it understands what's happening in the picture and knows what sound should be made in that environment. This is quite interesting. The review noted amazing character consistency, fantastic physics, and the ability to prompt the model to storyboard across multiple cuts. They dinged the model for dialogue quality, but overall it was a minor gripe. Alongside the model, ByteDance included their own interface to make getting started much easier. Previous Chinese video models have typically been API-only, making access a little more limited. Right now there are examples just absolutely flooding the internet, and I wouldn't be surprised if this meaningfully accelerates the timeline for when we see the next Veo or Sora.

Now moving to a totally different topic, back to the US and to data center politics. The White House is pushing AI firms to sign a pact on community protections around data center development. Politico reports that the Trump administration is seeking commitments from tech giants on principles for the AI buildout. They obtained a draft document spelling out the agreement, citing two anonymous administration sources. The pact is designed to ensure that data centers do not raise household electricity prices, strain water supplies, or undermine grid reliability.
Primarily, the tech companies will be pledging to bear the full cost of infrastructure upgrades and new power generation required to support their data centers. The administration is said to be planning to roll out the agreement in a splashy White House event, which is yet to be announced. Now, administration officials speaking on the record said that the draft pact was outdated, but declined to provide details of changes.

Over in our SaaS-pocalypse watch, Monday.com is the latest victim. Monday fell by 21% on Monday after the company issued weak guidance as part of their full-year earnings report. Revenue guidance was between $338 million and $340 million for the current quarter, falling slightly short of analyst expectations of $343 million. Income also fell way short of expectations. Like we've seen from some of these other stocks that have been hit, it's not that the report itself was terribly bad; revenue grew by 25% over the past year. But their 2026 revenue forecast was cut by a third since their investor day last summer, and the company withdrew 2027 guidance entirely. While co-CEO Eran Zinman tried to comment that, quote, we don't see any impact currently from any AI company, and we're shifting our product regardless to be more AI native, investors clearly weren't buying it. Overall, Monday stock is down more than 45% this year. Now, it doesn't help that for many, Monday.com is the poster boy for a company that's set up for AI disruption. Indeed, last week, as part of her coverage of the software crash, CNBC reporter Deirdre Bosa tried to recreate the platform to demonstrate that vibe coding isn't quite there yet, and was shocked to discover that Claude Cowork managed to deliver a functional duplicate that suited her needs in under an hour. I continue to believe that while the magnitude of the sell-off may be exaggerated, markets in this case are sniffing out something fairly important. Does that mean, though, that SaaS is dead?
Or that the model of what a SaaS company is is changing so radically we simply don't have the shape of it yet? That's certainly more in line with the example of Databricks. This week Databricks announced their next tranche of fundraising and released some serious revenue numbers alongside it. The company's revenue run rate is up to $5.4 billion, which is up 65% year over year. The fundraising round saw Databricks gather $7 billion in fresh capital across debt and equity. CEO Ali Ghodsi framed this round as a rebuke of the death-of-software narrative, but only for companies willing to make the AI-first transition. He said: everybody's like, oh, it's SaaS, what's AI going to do with all these companies? For us, it's just increasing the usage. A quarter of their $5.4 billion in ARR is attributed to Databricks' AI products. The company started making the transition in 2024, recognizing that they needed to build an agentic stack on top of their database product. To that end, they went on an acquisition spree targeting companies that specialize in agent-compatible data discovery. Databricks now has two core product lines addressing the AI transformation and is making the bet that companies won't rip out their SaaS in favor of an in-house coded solution. At the same time, they are betting that agentic UXs will completely change the SaaS business, eliminating the need for clunky front ends or technical skills to query a database. Ghodsi believes that the big risk for SaaS companies will be clinging to their legacy UX while everyone else goes agentic. One really crazy statistic, again shared by Deirdre Bosa: she writes that 80% of databases on Databricks' platform are being built by AI agents. Which means, as she points out, AI is building more enterprise software than humans are. So does all this excitement around Databricks mean that they are going to be added to the list of potential AI IPOs this year?
CEO Ali Ghodsi is reading the room, and the answer seems pretty clear: now, he said, is not a great time to go public.

Following up on one of the big controversial stories of this year, OpenAI has begun the rollout of advertising. On Monday, the company announced that ChatGPT users would start to see advertising as of this week. The ads will only appear for logged-in free users as well as those using the lower-priced $8 Go subscription. Plus, Pro, Business, Enterprise, and Education subscribers won't be served ads. OpenAI says that despite the controversial claims from Anthropic's Super Bowl commercial, the ads won't be embedded in the normal ChatGPT session. Instead, the ads are displayed in a separate section in the lower third of the screen and clearly labeled as sponsored links. The default settings allow OpenAI to target ads based on the contents of the current and past chats, as well as information stored in memory. However, OpenAI has also gone to pains to provide users control over which ads they see. Users can dismiss certain ads, share feedback, turn off the option for ads to be based on past chats, and completely delete their ad data if they choose. Users can also turn off ads entirely in exchange for reduced usage limits. Basically, all of this seems like the most tentative introduction of ads in the history of the internet, and the question is, will people actually care? We are going to have to wait and see.

And speaking of waiting, although maybe not much longer, rumors are swirling that OpenAI is also preparing another new model release that could come as soon as this week. The rumors come from CNBC, who got hold of Slack messages sent by Sam Altman on Friday in which he wrote that ChatGPT is, quote, back to exceeding 10% monthly growth, and that OpenAI is preparing to launch an updated chat model this week. Now, obviously, since we got 5.3 Codex before we got the rest of 5.3, you've got to think it's 5.3 for everyone else that's coming.
OpenAI has to be happy with the performance of Codex so far, though. Altman tweeted on Monday that more than 1 million people downloaded Codex in the first week, and that Codex saw 60% growth in overall usership last week as well. The battle for the most important AI use case continues, but for now, that's going to do it for the headlines. Next up, the main episode.

All right, friends, quick break to talk about a question I hear constantly: how do you actually move from AI experimentation to production without getting buried in infrastructure decisions? That's where Rackspace AI Launchpad comes in. It's a fully managed service designed to help enterprises build, test, and scale AI workloads through a guided, phased approach. With AI Launchpad, Rackspace manages the infrastructure, GPUs, and core tooling so teams can focus on validating use cases instead of building environments from scratch. You start with a proof of concept, move into a real pilot, and then scale into production on managed, enterprise-grade GPU infrastructure. Whether you're testing inference at the edge, fine-tuning foundation models, or setting up a production pipeline, the goal is the same: faster progress with less operational friction. If you're ready to move beyond demos and actually put AI to work, take a look at Rackspace AI Launchpad and see how a managed path to production can accelerate results. Visit rackspace.com/AILaunchpad to learn more.

You've heard me talk about AssemblyAI and their insanely accurate voice AI models, but they just shipped something big. Universal 3 Pro is a first-of-its-kind class of speech language model that lets you prompt speech recognition with your own domain context and vocabulary instead of fixing transcripts in post-processing. It's more flexible than traditional ASR and more deterministic than LLMs, so you get accurate output at the source and can capture the emotion behind human speech that transcripts often miss.
All without custom models or post-processing hacks. And to celebrate the launch, they're making it free to try for all of February. If you're building anything with voice, this one's worth a look. Head to assemblyai.com/freeoffer to check it out.

You've tried in-IDE copilots. They're fast, but they only see local silos of your code. Leverage these tools across a large enterprise codebase and they quickly become less effective. The fundamental constraint: context. Blitzy solves this with infinite code context, understanding your codebase down to the line-level dependency across millions of lines of code. While copilots help developers write code faster, Blitzy orchestrates thousands of agents that reason across your full codebase. Allow Blitzy to do the heavy lifting, delivering over 80% of every sprint autonomously with rigorously validated code. Blitzy provides a granular list of the remaining work for humans to complete with their copilots. Tackle feature additions, large-scale refactors, legacy modernization, and greenfield initiatives all 5x faster. See the Blitzy difference at blitzy.com. That's B-L-I-T-Z-Y dot com.

Today's episode is brought to you by Superintelligent. Superintelligent is a platform that, very simply put, is all about helping your company figure out how to use AI better. We deploy voice agents to interview people across your company, combine that with proprietary intelligence about what's working for other companies, and give you a set of recommendations around use cases and change management initiatives that add up to an AI roadmap that can help you get value out of AI for your company. But now we want to empower the folks inside your team who are responsible for that transformation with an even more direct platform. Our forthcoming AI Strategy Compass tool is ready to start to be tested. This is a power tool for anyone who is responsible for AI adoption or AI transformation inside their companies.
It's going to allow you to do a lot of the things that we do at Superintelligent, but in a much more automated, self-managed way and with a totally different cost structure. If you are interested in checking it out, go to aidailybrief.ai/compass, fill out the form, and we will be in touch soon.

Welcome back to the AI Daily Brief. Today we are talking about a study from a couple of Berkeley Haas professors that was published in the Harvard Business Review. The study is ongoing and comes from Aruna Ranganathan and Shing Chi Maggie Yee. The TLDR is that AI, at least so far in a lived context, is not reducing work. Instead it is increasing and intensifying it. There is both good and bad news in this. Or really, a better way to put it might be that it is neither good nor bad a priori, but creates different types of challenges and opportunities. Maybe the best news is that we're finally deep enough into the impact of AI that we can start to respond to what's actually happening rather than what we think will happen in the future.

So first, let's talk about this study. Some studies go broad; this one goes deep. The researchers effectively embedded with a 200-employee technology company from April to December of last year. In terms of level setting the type of company this was: it was not a company where AI use was mandated, but it seems like it was the type of company where employees were going to be positively inclined towards AI in the first place. Now, the research started from a premise that I think is a common feeling: that AI can save us a bunch of time by making tasks faster. And of course, anyone who's ever used it to help produce a slide deck or generate an image or crunch some data knows that on a single-task level, that's absolutely true. There is, however, an interesting conundrum that this brings up. If AI lets us do less work, does it also imply that we're less valuable? Will our employers need less of us, or fewer of us?
This is pretty quintessential to the fear of AI job displacement, and the response to the Genspark Super Bowl ad showed those fears. While some people took it in the spirit in which it was offered, that AI can make our work lives better, many people thought that this was basically Ferris Bueller telling employers that they didn't need people anymore. Now here's the interesting thing for anyone who's been eyeballs-deep in this agentic shift of the last couple of months, the Claude Code, Opus 4.5, Codex 5.2, now 5.3 moment that we've been tracking throughout the year on this show: the feeling could not be more the opposite. The new power of having actual productive agents at our disposal has made people feel like they are leaving tons of value on the table. The concern shifts from "am I valuable?" to "am I doing enough?" And interestingly, even in the pre-shift paradigm, this seems to be closer to what these Haas researchers found.

They identified three main forms of work intensification that were exhibited among those using AI. The first they call task expansion. Because AI can fill in gaps in knowledge, they write, workers increasingly stepped into responsibilities that previously belonged to others. Product managers and designers began writing code, researchers took on engineering tasks, and individuals across the organization attempted work that they would have outsourced, deferred, or avoided entirely in the past. And this they identify as part of where the intrinsic reward of AI comes from. They write: generative AI made those tasks feel newly accessible. These tools provided what many experienced as an empowering cognitive boost. They reduced dependence on others and offered immediate feedback and correction along the way. The researchers found that while many of these things started as experiments, they ultimately accumulated into a meaningful widening of job scope. A second category of work intensification they identify is blurred boundaries between work and non-work.
The way they describe it: because AI made beginning a task so easy, workers slipped small amounts of work into moments that had previously been breaks. Many prompted AI during lunch, in meetings, or while waiting for a file to load. Some, they write, described sending a quick last prompt right before leaving their desks so that the AI could work while they stepped away. I know more than a few of you are furiously shaking your heads knowing exactly how that feels. The last major category of work intensification they identified is more multitasking. Basically, the idea is that people were doing a bunch of things at once. They discussed manually writing code while AI generated an alternative version, running multiple agents in parallel, or reviving long-deferred tasks because AI could handle them in the background.

So, TLDR: the work intensified. And there are very clearly some very good things about this. First of all, people can clearly achieve way more than they did before. That means that organizations can move farther, faster. Also, as the researchers identified, the rewards for expanding your capabilities are not just the satisfaction of knowing you helped your organization; there are intrinsic rewards of feelings of new capabilities and mastery in new areas. For me, I think the biggest one, which the authors don't actually discuss at all, is that this is a fundamental reminder that the aggregate amount of work to be done is not some fixed state. It can always expand to accommodate more capacity to do the work. And that, I believe, in key ways changes the calculus around long-term AI job disruption. My belief has always been that in the short term you will of course see organizations use AI to cut costs and do the same with less. I think in many cases they will be rewarded in the short term by markets who like that cost cutting. The winning organizations, however, will be those who use AI to dramatically expand what they do.
They will not be focused on doing the same with less, but doing more with the same, or way more with a little more. They will be thinking in terms of new product lines, new revenue streams, new categories and markets to expand into. The winners will view AI not as an efficiency technology, but as an expansionary, opportunity-creating technology. And this points in that direction. Now, as one total aside, by the way, the so-called SaaS-pocalypse that's happening right now may actually have some interesting impacts on that conversation as well, given that companies are not only not being rewarded for just cutting costs, they're not even being rewarded for staying on the same revenue trajectories. They need to show how they can fundamentally compete for the long term or suffer massive multiple compression. Anyways, that's not strictly about efficiency versus opportunity technology, but it's just an interesting observation I have in the background.

So overall, I think what the researchers are finding is actually net positive. However, like I said, it's probably ultimately neither strictly positive nor strictly negative a priori, and instead just demonstrates what the real challenges will be rather than the challenges we had previously imagined. And there are certainly real new challenges that the researchers also identified. As people expanded the tasks that they could do, there were spillover effects to other people who had previously done those tasks, who now had new types of cleanup work. For instance, they write, engineers in turn spent more time reviewing, correcting, and guiding AI-generated or AI-assisted work produced by colleagues. These demands extended beyond formal code review. Engineers increasingly found themselves coaching colleagues who were vibe coding and finishing partially complete pull requests.
The other big challenge, and certainly the one that these researchers are most focused on, was sort of a frog-boiling-in-the-pot effect, where people didn't even realize how much less downtime they had and how much expectations around speed of execution had increased. The authors wrote: some workers described realizing, often in hindsight, that as prompting during breaks became habitual, downtime no longer provided the same sense of recovery. As a result, work felt less bounded and more ambient, something that could always be advanced a little further. They also note that over time, the AI rhythm raised expectations for speed, not necessarily through explicit demands, but through what became visible and normalized in everyday work. Many workers noted that they were doing more at once and feeling more pressure than before they used AI, even though the time savings from automation had ostensibly been meant to reduce such pressure.

Now, the authors provide a couple of different ways for organizations to think about how to build these new challenges into their AI practice. They talk about intentional pauses, sequencing, and human grounding as some of the new management strategies that organizations might need to put into place. But hold aside how we respond to the challenges of this shift; it's very clear that the shift is here and happening. One cannot throw a rock at AI Twitter right now without hitting a post like this one from OpenAI President Greg Brockman: feels like such a wasted opportunity every moment your agents aren't running. Allie K. Miller writes: now, before every long meeting, I'm forced to ask myself what I want Claude Code or Claude Chrome to do for me during that time. Parallel work can be exhausting. Unclear what the best approach is. Simon Willison wrote a whole blog post about the research and said: this captures an effect I've been observing in my own work with LLMs.
The productivity boost these things can provide is exhausting. I'm frequently finding myself with work on two or three projects running in parallel. I can get so much done, but after just an hour or two, my mental energy for the day feels almost entirely depleted. I've had conversations with people recently who are losing sleep because they're finding building yet another feature with, quote, just one more prompt irresistible. Again, I'm sitting here shaking my head as I think about watching the minutes creep by every night as my bedtime gets later and later as I try to just push a little bit more. The point is that everyone is feeling like this, and according to a new agentic coding trends report from Anthropic, it certainly seems like this is going to accelerate.

This report also came out over the last couple of days, and while nominally and in some ways it's about software engineering and the software development lifecycle, it is clearly about much more than that now that agentic coding has infiltrated everything. Two of the trends, I think, set the grounding for who is actually implicated. Trend seven is that non-technical use cases expand across organizations: that coding capabilities will democratize beyond engineering, that domain experts will implement solutions directly, and that the productivity gains will extend across entire organizations. This is obviously happening, and it's happening right now. And it means that as we think about how agentic coding is going to change in 2026, the implications are not just for the engineering department, but for all of us. Relatedly, trend five is agentic coding expands to new services and users. One of my predictions for 2026 was that we would actually see vibe coders hired specifically to work on non-engineering issues: basically, internally deployed vibe coders that help people in different parts of the organization use software to solve their problems.
To get an early preview of what that might look like, go check out Lenny Rachitsky's recent conversation with Lazar Jovanovich on Lenny's Podcast. Lazar is a full-time vibe coder at Lovable, and I think he paints a bit of a picture of how this might look in organizations in the future. In any case, it's quite clear that the shift from assisted AI to agentic AI is exacerbating some of these feelings of needing to be always on, of not doing enough, of wanting to always have agents running in the background. In short, all of us are now managers, and we are all feeling the sting of a big, highly capable team that's being underutilized because we haven't gotten it together to tell them what to do. That is going to get worse, not better, based on Anthropic's trend number two: single agents evolving into coordinated teams. The rise of OpenClaw right now is giving us an absolute preview of what this is going to look like. Anthropic's specific prediction is that multi-agent systems will replace single-agent workflows, and that is just happening right now. I've got a thread going on Twitter that's basically the Patrick Bateman American Psycho scene where they show off their business cards, except instead we're showing off the mission controls we've all coded to handle our multiple OpenClaw agents.

So what does this all add up to? First of all, for those who are worried about AI creating mass job displacement, I think in the long term this certainly puts some evidence in the column that our market system will expand to accommodate all of this new work that AI is capable of. I think that's good news. And while I don't think that we should be Pollyannish about the potential impacts on job displacement in the short term, I think that overall that's good news.
But I do also think that these researchers are right to point out that this new capability enhancement is bringing new types of human organizational challenges, ones which very much need to be dealt with and probably need new structures put in place to deal with them. But as I said at the beginning, I'm so glad that we're finally in a spot where we can start responding to what's actually happening, rather than just our future predictions. Big thank you to the researchers for doing this important work. And that's gonna do it for today's AI Daily Brief. Appreciate you listening or watching, as always. And until next time, peace.
