Transcript
A (0:00)
Today on the AI Daily Brief, we're discussing why AI actually won't take your job. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright, friends, quick announcements before we dive in. First of all, thank you to today's sponsors: KPMG, Robots and Pencils, Blitzy, and AIUC. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, sign up for our newsletter, or anything else in the ecosystem, go to aidailybrief.ai. Today it is a weekend day, which means, of course, that this is a big think episode. And today we're taking on a topic that is just about as fraught as anything in artificial intelligence. That is, of course, the question of job displacement. Every day there is some news story about a company reducing its workforce, blaming AI at least in part, or some study which shows all the jobs that could be replaced by AI. And it's not like Americans are particularly comfortable with the state of the economy already. Now, I want to make clear that my argument this episode is not that we shouldn't be concerned at all about jobs. My argument is that in general we're having the wrong conversations about it. So let's talk about a few reasons why "will AI replace all the jobs?" is the wrong question. The first problem is that it sort of acts as though white collar jobs are the only category that matters. Now, white collar jobs are a big part of the total US workforce. And it is absolutely true that one of the reasons this particular wave of technology-driven job displacement is hitting people so differently is that, frankly, most of the previous tech disruptions we've experienced in our history have hit blue collar and physical jobs first. The fact that AI is, on the other hand, coming first for white collar jobs is a real reversal of that trend, with some fairly big implications. White collar workers are proportionally more economically well off and, by extension, politically enfranchised. Which, to be clear, is not me saying that that's a good thing; it's just the way that it is. And so the potential for backlash, and politically potent backlash, to AI, I think, goes up. And yet still it's very clear that one of the things that's happening with AI is that it's reminding people that white collar knowledge work type jobs aren't all that's out there. In fact, it's exposing, a little bit at least, that the pipeline to white collar jobs (do really well in high school, get into a good college, go massively into debt for your college degree, make it all back with your nice white collar job) was broken before AI ever came along. College is too expensive and didn't translate well enough into high-earning jobs in general to justify its cost. That was a system that was already on the fritz, even if AI ends up being the straw that breaks the camel's back. You can see already in the early days of AI that its impact on knowledge work is causing people to reevaluate the very fundamentals of which types of jobs they aspire to. That process could lead us into different places, where the desiccation of certain categories of white collar jobs, while impactful, is also mitigated by the shifts that people are making now. The next reason we shouldn't be asking "will AI replace all the jobs?" is that the very question is massively over-rotating on recent announcements.
There have, of course, been a string of layoff announcements where AI was cited as, if not the cause, at least some part of the cause. We had Block laying off 40% of its workforce, big layoffs at companies like Amazon, and more broadly, warnings from CEOs around how AI was likely to change the composition of job structures within their companies over time. We've even got scoreboards now of how many jobs have been displaced by AI. And yet there is also some fairly good evidence that AI is, at this point, being disproportionately blamed, let's say, for job cuts and layoffs that would have happened anyway. A recent Resume.org survey of 1,000 hiring managers found nearly 60% said that they emphasize AI's role in layoffs because it is viewed more favorably by stakeholders than saying layoffs or hiring freezes are driven by financial constraints. Meanwhile, only 9% of those respondents said that AI had fully replaced any roles. A Bloomberg opinion piece reads: The reason it works is well understood. Decades of research on how markets react to layoff announcements have established a consistent pattern. Investors punish companies that frame cuts as a response to problems. But when a company frames the same cuts as proactive restructuring, the penalty disappears. The stated reason for the layoff matters more than the fact of the layoff. AI has become the most powerful proactive frame available. "Restructuring around AI" is a growth signal. "We overhired during the pandemic and revenue softened" is an accountability signal. So one thing we have to at least consider as we think about how much we should be concerned about AI-related job displacement is the extent to which we are experiencing a wave of AI washing right now. A third reason "will AI replace all the jobs?" is the wrong question to be asking, even though it's being asked a lot, is that there are some who think that we're making incorrect assumptions about how translatable it is to go from AI's disruption of coding and software engineering to AI's disruption of other types of knowledge work. Recently, Carnegie Mellon and Stanford University released a joint study called "How Well Does Agent Development Reflect Real-World Work?" The abstract reads: AI agents are increasingly developed and evaluated on benchmarks relevant to human work, yet it remains unclear how representative these benchmark efforts are of the labor market as a whole. The study, they say, reveals substantial mismatches between agent development, which tends to be programming-centric, and the categories in which human labor and economic value are concentrated. Professor Ethan Mollick puts it like this: all of the effort is going into benchmarking for coding, but that is a small part of the actual jobs people do, which leaves the true trajectory of AI progress less clear. It is no secret at this point that the entire structure of software engineering has changed because of AI. This is the big disruption that we've been living through for the past few months. What's more, we're seeing coding start to impact how knowledge work happens in other roles. When everyone can use software to solve their problems, it's going to change the nature of other jobs as well. And yet there's a pretty clear through line that some are assuming, from "AI can do coding super well" to "AI can do everything else super well."
And there's an argument that particular attributes of coding (for example, its ability to have deterministic correctness and a clear right and wrong) don't actually apply to other areas of knowledge work, which are much more messy and confused and don't have quite the same ability to distinguish correct from incorrect and good from bad. Now, this is a big debate right now, but it's more evidence in the column that all of this is going to be more nuanced than we perhaps think, sitting from our seat watching how AI has just cleaved through the old traditional software engineering process. A fourth reason "will AI replace all the jobs?" is the wrong question is the extent to which it discounts human preference as a market force. I wrote about this when I was stuck in Brazil as part of my 55-hour trip back from South America, with the reflection that although AI was useful throughout parts of that experience, at basically every critical juncture I was looking for access to an actual human. The reason that I was looking for access to a human is that I didn't want to be subject to the policies as they were written. I wanted to try to talk my way into special treatment. The argument that I was making is that human systems are designed with some amount of discretionary non-compliance built in, and if we turn everything over to AI with no ability to use human judgment to make human exceptions, I think systems in general get more brittle. Even beyond that, though, a lot of the discourse around jobs assumes that the only function of markets is to be as efficient as possible. But that's a means, not an end. What markets are actually trying to do is service human desires and human needs. And to the extent that human desires are for other human-mediated experiences, it doesn't matter if everything can be more efficient because of AI; markets will organize themselves around provisioning what people want, namely other humans. Again, this is not to say that there won't be AI disruption, but how much it is and in what areas is subject to a lot of forces that aren't just the onslaught of efficiency and productivity. A fifth reason "will AI replace all the jobs?" is the wrong question is that at no point in history has this fear ever been right, at least not in the way that people felt it. There are infinite examples of this, from the Luddites and textile automation, to mechanized agriculture, to ATMs and bank tellers, to spreadsheets and accountants, to the Internet and retailers. In each case, people spotted the destruction in creative destruction before they saw the creation. But in each case these were massively market-expansionary forces. Now, the way things have happened in the past does not guarantee that they will happen the same way in the future. But certainly the pattern that the fear of technological job apocalypse has never actually played out the way people feared is worth at least keeping in mind when we're considering how much to fear job displacement in this AI context. A sixth reason "will AI replace all the jobs?" is the wrong question is the one that is the anchor of my ultimate optimism, which in short is the fact that capitalism is radically expansionary. I think that the human capacity for stuff of every type (experiences, services, things) is basically unlimited. We are voracious, ever-expanding demand machines. I think there's even an argument to be made that the market purpose of technology is to expand the capability of markets to meet this unlimited demand.
Joscha Bach put it a different way. He writes: Many people believe that there is a fixed amount of work in the world, and if we give these jobs to machines, humans will not have jobs or starve. This intuitive model of economics is fundamentally wrong. Our wealth depends on the amount and quality of goods and services we can produce and distribute among each other. Automation allows us to make more of everything for everyone. There is always more to do for us, things we could not afford to do before. Automation allowed us to get away from the important drudgery of agriculture, manufacturing, and now documenting, calculating, evaluating, memorizing, and so on. Proof positive of this to me is that 90-plus percent of my AI use cases are not doing stuff I used to do a little bit more efficiently. It's doing new things that I never could before, and the net result of me doing those new things is not extra saved hours and the same amount of things delivered to all of you, my audience. It's a massive expansion in what I am delivering to my audience. There's a reason that in addition to the podcast, there's a Claude Camp and an Enterprise Claude and a Superintelligent and an AIDB Intelligence and an Agent Madness. These are things that would not be possible if it weren't for AI. There's also a competitive dimension to this. When Jim Cramer recently asked Nvidia CEO Jensen Huang why companies are laying people off if AI is supposed to make everyone more productive, Jensen responded: For companies with imagination, you will do more with more. For companies where the leadership is just out of ideas, they have nothing else to do, they have no reason to imagine greater than they are. When they have more capability, they don't do more. I've referred to this in the past as the difference between efficiency AI and opportunity AI. I think it's inevitable that we go through a phase where people are focused on doing the same with less. That's efficiency AI. I also think that it is completely inevitable that, because of the nature of our expansionary capitalist system, the companies that win in the long term will be those who opt not for efficiency AI, doing the same with less, but for opportunity AI: in other words, doing more with the same, or doing way, way more with just a little more. I recently put it a little more crassly on Twitter, writing: call me crazy, but I think the companies that give everyone on their team a team of agents are going to kick the crap out of the companies that replace their teams with a team of agents. This is one of my most fundamental beliefs and why I believe that, in the long run, AI will cause a mass expansion of jobs and opportunity and just overall market size. One final reason "will AI replace all the jobs?" is kind of the wrong question is that if it did come to pass, if we all of a sudden saw 15 or 20 or 30% unemployment, we would need some totally different structuring of society that doesn't require jobs to be a full participant anyway. In other words, there's no world in which AI replaces all the jobs where society structurally punishes people without jobs. Now, given that this is all a theoretical conversation about what could happen in the future, there are only little drips and drabs of evidence of what that type of societal conversation might look like. But you're starting to see it. Congressman Ro Khanna recently called for a new tech social contract and laid out seven big principles. That same week, Pete Buttigieg talked about the idea of needing a new social contract.
If the prognostications of AI job displacement do come to pass, this is the type of conversation that we're going to have. AI replacing all the jobs, in other words, would not happen in a vacuum. All right, folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their own client zero. They embedded AI and agents across the enterprise: how work gets done, how teams collaborate, how decisions move. Not as a tech initiative, but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum. The outcome was a more capable, more empowered workforce. If you want to understand what that actually looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI. Most companies don't struggle with ideas. They struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap. They design and deliver intelligent, cloud-native systems powered by generative and agentic AI with focus, speed, and clear outcomes. Robots and Pencils works in small, high-impact pods: engineers, strategists, designers, and applied AI specialists working together to move from idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results, including initial launches in as little as 45 days, depending on scope. If your organization is ready to move faster, reduce complexity, and turn AI ambition into real results, Robots and Pencils is built for that moment. Start the conversation at robotsandpencils.com/aidailybrief. That's robotsandpencils.com/aidailybrief. Robots and Pencils: impact at velocity. Weekends are for vibe coding. It has never been easier to bring a passion project to life. So go ahead and fire up your favorite vibe coding tool. But Monday is coming, and before you know it you'll be staring down a maze of microservices, a legacy COBOL system from the 1970s, and an engineering roadmap that will exist well past your retirement party. That's why you need Blitzy, the first autonomous software development platform designed for enterprise-scale code bases. Deploy it at the beginning of every sprint and tackle your roadmap 500% faster. Blitzy's agents ingest your entire code base, plan the work, and deliver over 80% autonomously validated, end-to-end tested, premium-quality code at the speed of compute: months of engineering compressed into days. Vibe code your passion projects on the weekend. Bring Blitzy to work on Monday. See why Fortune 500s trust Blitzy for the code that matters at blitzy.com. That's blitzy.com. Regular listeners will know that I've been recently following the new AI agent standard, AIUC-1. What piqued my interest initially was a string of leading AI companies like ElevenLabs, Intercom, and UiPath announcing their certifications back to back. But what's even more interesting than who's participating is the way that AIUC represents an answer to some of the key enterprise AI adoption challenges that we talk about on the show all the time. First of all, the standard actually keeps up with AI, being updated every single quarter.
It's comprehensive, designed with over 100 Fortune 500 security leaders to cover all the risks that enterprises care about. And finally, it rigorously tests how agents behave in tricky situations or under adversarial attacks. Unlike other standards that are mostly just about policies, the combination gives enterprises the trust they need to deploy AI agents with confidence. Head to aiuc-one.com if you want to learn more. That's aiuc-one.com. So now that I've discussed why I don't think the current discourse about jobs and displacement is the right one, and since I acknowledged at the front of the episode that there are important conversations to be had about this, I think it's incredibly important not to be Pollyannish about the changes, and to recognize that even if the optimists like me are correct about the long-term expansion of jobs in the economic system that AI will represent, that is not mutually exclusive with there being an extremely painful, disruptive, and somewhat protracted liminal in-between, in which the incredible amount of change does cause its own type of havoc. In other words, AI doesn't have to take all the jobs for it to cause some challenges in the short term. This, by the way, isn't too far from what people like Sam Altman have said. At the recent BlackRock US Infrastructure Summit, he said: I am not a long-term jobs doomer. I think we will figure out new things to do, but I think the next few years are going to be a painful adjustment. And to be intellectually honest, if we are going to plumb the depths of history, while it is the case that all of these previous technology disruptions that people worried would be net job-destructive were actually massively expansionary, they did still have the impact in the short term, in each case, of completely eliminating certain categories of jobs. Textile automation massively expanded the global economy, but it also wiped out a certain class of artisan. Mechanized agriculture helped ensure that people stopped going hungry, but entire communities were uprooted as part of the process. And when it came to ATMs and bank tellers, spreadsheets and accountants, the Internet and retail, in each case there were categories of jobs that did go away, even if more jobs were later created. We owe it to ourselves, as we have the conversation about job displacement, to move past big, bombastic, headline-grabbing fears and try to get into the nuance of what actually is likely to happen. So let's talk about a set of what I think are better conversations to have about the impact of AI on jobs. First of all, I think we're going to want to try to better map the actual impact. One of the best ways to do that, I believe, is, rather than focusing on jobs as the atomic unit of disruption, to instead look at tasks. What tasks will AI eliminate? From there, we can obviously go back and ask which jobs are primarily bundles of those tasks and map overall job exposure that way. But I think we need to engage on a task-by-task level. And luckily, there's some evidence that people are starting to do this. Goldman Sachs, for example, recently put out a study where they focused on tasks as a unit, finding that AI could automate 25% of all work tasks in the US. From there, they looked industry by industry to see what percentage of that industry's tasks were exposed. Now, the results followed a fairly similar pattern to which professions were exposed.
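To make that task-level framing concrete, here is a minimal sketch of what exposure mapping can look like: model each job as a time-weighted bundle of tasks, score each task's automatability, and roll that up to a job-level exposure number. Everything in it — the task names, time shares, and automatability scores — is hypothetical and purely for illustration; it is not the Goldman Sachs methodology.

```python
# Hypothetical sketch of task-level exposure mapping (illustrative data only).
# Jobs are modeled as weighted bundles of tasks; each task gets an assumed
# probability that AI could automate it.

# Illustrative automatability scores per task (invented for this example).
task_automatability = {
    "draft_reports": 0.70,
    "reconcile_accounts": 0.80,
    "client_meetings": 0.10,
    "site_inspections": 0.05,
}

# Jobs as bundles of (task, share-of-work-time) pairs; shares sum to 1.
jobs = {
    "junior_analyst": [
        ("draft_reports", 0.5),
        ("reconcile_accounts", 0.4),
        ("client_meetings", 0.1),
    ],
    "field_engineer": [
        ("site_inspections", 0.7),
        ("draft_reports", 0.2),
        ("client_meetings", 0.1),
    ],
}

def job_exposure(task_mix):
    """Exposure = time-weighted average automatability of the job's tasks."""
    return sum(share * task_automatability[task] for task, share in task_mix)

for job, mix in jobs.items():
    print(f"{job}: {job_exposure(mix):.0%} of work time exposed")
```

With these made-up numbers, the desk-bound analyst comes out far more exposed than the field engineer, even though both use some of the same tasks. That is the point of the task lens: it lets you ask which parts of a job are exposed rather than treating the whole role as a single yes-or-no.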
But by coming at it from the task level, you can ask how much certain jobs are likely to shift, or be able to shift, to other types of tasks. Another important question is to ask, in a more nuanced way, what AI exposure actually means. There tends to be an assumption that when people say a task is exposed to AI, it means that AI is likely to displace that task. But that doesn't necessarily follow. Chicago Booth professor Alex Imas recently wrote: Exposure does not mean threat of displacement. It can literally mean the opposite. AI-exposed jobs may increase hiring and attract higher wages. It all depends on (a) elasticity of consumer demand and (b) the number of AI-exposed tasks in a job. Another important question as we come at it from the task level is to understand how much of the AI capability overhang is a function of time versus something else. You might have seen this chart going around from Anthropic's recent economic research, where they mapped the theoretical AI coverage of different occupational categories against the observed AI coverage of what it was actually being used for. The big story here was that while AI could theoretically do a huge part of jobs like management, business and finance, and computer and math, it was being called on only to do a small portion of those things. Is that simply because it hasn't had time to diffuse yet, and it will ultimately do all those things? Or are there other more structural, more systematic, more human reasons why the observed AI coverage will never match theoretical AI coverage? Another maybe better question than just full-on role displacement is what sort of wage pressure the democratization of intelligence might put on different roles. Former Salesforce AI CEO Clara Shih recently wrote that while full AI role displacement will happen in certain roles, history shows that wage resets are a more common, insidious, and often equally disruptive way that new technologies affect workers. A few examples she gives include the intra-sector squeeze, where displaced workers flood the remaining jobs in their own field, compressing wages. A second version of this is labor supply growth outpacing labor demand. Basically, if everyone can do everything, it potentially floods the labor supply, which compresses wages. And there are also inter-sector pay cut and spillover effects, where displaced high-skill workers switch fields, often taking a pay cut while displacing incumbent workers. Think about college graduate underemployment as an example. This question about wage pressure strikes me as much more significant, especially in the short and medium term, than just "will AI take your job?" Now, another whole area of more nuanced and, I think, more productive conversations around AI and jobs has to do with understanding how surviving work transforms. One obvious question within this is which types of workers are likely to be most resilient to job displacement. Brookings recently released research on exactly this. It tried to put some scoring around US workers' capacity to adapt to AI job displacement. And while there was lots to question in exactly how they designed their methodology, this strikes me as a more productive conversation, especially as we think about policy remediations, than just solely looking at which jobs are likely to be displaced.
Basically, if you have two types of roles that are equally displaced by AI, does it make more sense to have policy interventions for the one where the worker being displaced is much more adaptable and resilient to other types of roles, or the one where their combination of skills, geographic locale, savings profile, et cetera, makes it much harder for them to adapt? Obviously, that latter category might make more sense from an intervention standpoint. Another question around this theme of how surviving work transforms is what the remaining jobs look like now that everyone can use code and software. I recently asked people on Twitter what has had a bigger impact so far: software engineers changing how they build thanks to agents, or non-software engineers being able to build things with code for the first time. It was pretty close, at 54 to 46% in favor of non-coders being able to code. But it grew when I asked what will have the bigger impact long term, with a full two-thirds thinking that non-coders' ability to code would have a bigger impact than software engineers changing how they code. This gets to a broader question of just role redesign. While I know plenty of people who don't think that AI is going to replace everyone, I also pretty much don't know anyone who doesn't think that AI is going to impact and change almost every job. Spending more time on how that change is likely to happen, and what it transforms roles into, feels better than just some blind worry about AI taking the role in the first place. Related to this is, I think, a very reasonable question of how much the average team or company will decline in size. And maybe a better way to put this is: if the average amount of work it took to accomplish a goal was X in the past, is the average amount of work to accomplish that goal in the future half of X, one tenth of X, or one hundredth of X? And based on that, how will it change how we organize working units who work together to accomplish goals? On one end of the spectrum, every company could stay exactly the same size and just have a massive increase in output. On the other end of the spectrum, we could see a mass compression in the short term, with companies first consolidating around smaller teams that accomplish the same set of goals, even if the second phase is them once again expanding to accomplish new types of goals that weren't possible before with those additional savings. Somewhat related to the question of teams is the question of how AI will change the balance of power between managers and individual contributors. Thanks to agents, ICs themselves will likely become, at least in part, managers or orchestrators of their own small teams, which will fundamentally change how much they can accomplish. That will likely create new types of constraints in and around the remit and flexibility that ICs have to actually go do more. In other words, are they going to be bottlenecked by managerial processes that are designed for a world where any individual contributor could do much less? Palantir CTO Shyam Sankar recently said on TBPN that AI is going to be the antidote to the managerial revolution of the 20th century. He said: All of this power that was sucked away from the frontline workers who actually knew what they were doing to an amorphous blob of middle managers, that's being reversed. All the bureaucracy is getting cut. He gave an example from the military. In the military, he said, I'm seeing incredible AI application developers who are not formally trained computer scientists. What happened?
Where did these people come from? I realized they've always been there. The thing is, what would this guy have done 10 years ago? Make a PowerPoint, try to convince some program manager that his ideas were good, only to be told they weren't? Now he just goes away in a corner for two weeks and builds it, and he's arguing about something that's empirical. And the commander is like, this works. Let's go. There are the beginnings of a conversation, particularly in consulting circles, for example, around how AI changes the org chart. But I think it goes farther than just the org chart. I think it's a more fundamental question of the power balance in different categories of organizational leadership. Another incredibly important conversation around the theme of how surviving work transforms is how we recalibrate output expectations. Recent research that we covered on this show has started to suggest that while we thought AI was going to save a bunch of time, in practice it is actually significantly intensifying work. Now, if you've used these tools, this will probably make sense to you. All of a sudden, everything is on the table. There's nothing that you can't do. At least that's the way that it feels. And so are you just supposed to do everything? What constitutes enough? There's going to have to be a renewed conversation around how much output people are supposed to have, because the ability to output will always be expansionary from here on out. And if we don't manage that within organizations, it'll just cause massive waves of burnout. Which, of course, gets to maybe a third category of what I think are better questions about AI and jobs, one that has to do with corporate responsibility during this transition. What does corporate responsibility look like now? How do you balance fiduciary responsibility with stakeholder responsibility? One thing that I do think people are right to have identified as broken is the implicit bargain between companies and their workers. For much of the 20th century, the deal was: when the company did well, the employees did well. Profits go up, salaries go up, people get raises, people get bonuses. Company not doing so well? Bonuses get frozen, raises get frozen. This felt coherent and fair and sort of obvious and intuitive to people. Now, it is obviously overly reductive to say that that's how it always worked, and certainly it was long before AI ever came around that the tension between responsibility to shareholders and responsibility to stakeholders could cause some major challenges. And yet I do think that people, especially over the last couple of years, have felt more like these two things have become completely unmoored from one another. Andrew Yang recently wrote that how humans are doing and how GDP is doing are diverging very sharply. At the end of last year, CBS News wrote a story: "Corporate profits are soaring even as layoffs mount. Economists call it a 'jobless boom.'" Even Fed Chair Jerome Powell said in the FOMC presser this week that the Fed is concerned about the very, very low level of job creation. In fact, he said, if you adjust for overcounting, there is effectively zero net job creation in the private sector. I would argue that we are long overdue for a larger conversation about corporate responsibility. I think that the need for that greatly precedes the rise of GenAI. However, GenAI is certainly putting quite a fine point on it.
Now, the last category of what I would call better questions about AI and jobs comes around the institutional and policy response. One question that I'm acutely interested in is: what do actually good reskilling programs look like, given the speed of change? I believe we are experiencing a huge deficit of good thinking in this area. I would almost go so far as to call us bereft of ideas. Which is not to say that it's not a problem that's been recognized. On Friday, the White House revealed its national AI legislative framework, and one of the six key points was about educating Americans and developing an AI-ready workforce. The overview says the administration wants American workers to participate in and reap the rewards of AI-driven growth, encouraging Congress to further workforce development and skills training programs, expanding opportunities across sectors and creating new jobs in an AI-powered economy. And yet, when you read the more extended policy framework, it's clear that they have absolutely no idea what that is supposed to mean. Congress, they write, should use non-regulatory methods to ensure that existing education programs and workforce training and support programs, including apprenticeships, affirmatively incorporate AI training. Okay. Congress should expand federal efforts to study trends in task-level workforce realignment in order to inform policies supporting the American workforce. Sure. Congress should bolster capabilities at land-grant institutions to provide technical assistance, launch demonstration projects, and develop AI youth development programs. Fine. All of these things are fine, but boy, are those not an answer to national AI reskilling. And to pretend they are is just absolute madness. We are firmly out of the world where a course delivered by a college, or an online set of videos with a badge you slap on your LinkedIn after, is anywhere close to dealing with the challenge of actually reskilling people for a totally different type of work. And of course, this extends more broadly into how our main education systems need to change. There is a growing question of whether college is still worth it, and I think probably even that should be redefined as a question of how college could still be worth it. What should it turn into to be once again valuable in an economically accretive way? And then, of course, outside of education, there are even bigger questions about what the right transition intervention programs look like. Should we, as Andrew Yang has proposed, tax the robots? Or, also as Andrew Yang has proposed, should we have universal basic income or universal basic services? If we really are convinced that AI is going to take all the jobs, those are the types of conversations we need to be having. But finally, and very importantly, I think we also need to be in a position to spot the new paths as they arise and to support people's transitions to the new type of economy that emerges. Even right now, AI is not only destroying jobs. A recent European Central Bank blog argued that the companies that were most AI-inclined had actually created more jobs than they had lost. A recent study from Gusto found that small businesses using AI got more productive and hired more people. Anthropic's recent research, where they interviewed 81,000 AI users, found that the people who had already experienced economic benefit from AI skewed heavily towards entrepreneurs, small business owners, and workers with side projects. For some, the pattern is clear.
An op-ed in eSchool News from Thomas Arnett of the Clayton Christensen Institute recently argued that AI may unleash the most entrepreneurial generation we've ever seen. And of course, it is absolutely the case that the marginal cost of entrepreneurship, the cost of starting a business, not only has never been lower but is trending towards actual zero. We've seen an incredible increase already in new websites, new iOS apps, new code pushed to GitHub. All of these things are exploding upwards. But as anyone who has ever tried to start anything knows, starting the thing and making the thing successful are not the same. And so if we are turning into an overall more entrepreneurial economy, what types of support do we need there, whether that's training, policy, or something else? What is abundantly clear is that AI is changing and will change the shape of so many roles. It will impact expectations of output for workers. It will impact their relationship with managers. It will impact the size of teams and companies. It will impact market expectations of what companies should be doing. AI will change a huge amount, in other words, about how work happens and how our economy functions. What it won't do, for most people, is straight up take their jobs. My argument, ultimately, is that we're moving into a period where we no longer have the luxury of dumb conversations, no matter how good they are for clicks. I am optimistic that we are starting to move into some of these better, more nuanced, more actually useful, productive, and directional conversations. And if you have made it all the way to this point in this podcast, I can tell you for sure that you are part of the solution and not the problem. For now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always, and until next time, peace.
