A
Legal teams face more data and more scrutiny than ever. They need AI built for both. Relativity is the AI platform for legal work, delivering defensible AI that handles the tedious tasks so judgment stays where it belongs: with you. Learn more at Relativity.com/IdeaCast.
B
Marketers know how it feels to invest in something that seemed incredible but didn't live up to the hype. We optimize for impressions, but then don't see any revenue. Instead, get the highest ROAS of all major ad networks with LinkedIn ads. Spend $250 and get a $250 credit. Just go to LinkedIn.com/IdeaCast. Terms and conditions apply. I'm Alison Beard, and this is the HBR IdeaCast. Harvard Business Review recently hosted the HBR Strategy Summit 2026, a day filled with expert advice and guidance from executives and academics. We're sharing the highlights of the event in this special IdeaCast series. Today, our final episode of the series is a masterclass from Andy McAfee, principal research scientist at MIT and co-founder and co-director of the MIT Initiative on the Digital Economy at its Sloan School of Management. In the session, he explains how businesses are building the right cultures to truly succeed with AI. He explains the rise of what he calls geek organizations and the gap that AI is creating between the best and the rest. He also takes audience questions with the help of HBR Editor at Large Adi Ignatius. Here's that masterclass.
C
I want to talk about a top-of-mind topic for a lot of people these days. Who's going to succeed? What's going to distinguish successful AI adopters from the also-rans in this era, right where we are now, where there's just substantial uncertainty about what's going on with AI? I want to grab some headlines from just the past couple days that underscore how profound this uncertainty is. I'm going to start with a headline from my co-founder and colleague, a very good economist, Erik Brynjolfsson, who wrote an editorial in the Financial Times just a while back saying the AI productivity takeoff is already here. We've been waiting for it. We have been relying on this insight from Robert Solow from a while back saying we see evidence of the computer age everywhere except in the productivity statistics. That has been true for AI. Erik says that that era is finally over, and now we can anticipate moving into an era of really high productivity growth because of AI. However, a recent survey of lots of CEOs around the world says not so fast. We're actually not seeing it. When we look around at the ROI of our AI investments, at the productivity boost or the KPI boost that our organizations have received from AI, it appears to not be there. Yet another very good economist, who is still at MIT, is Daron Acemoglu. You have a sign that he's a pretty good economist because he's got the Nobel Prize. He was just in an interview with MIT Tech Review saying that there's not any productivity benefit from AI. So we're in this era of deep, deep uncertainty. There was one chart that appeared in the FT a little while back that I think summarizes where we are beautifully. It said, okay, look, here are three possibilities. Either AI is going to end scarcity and bring us into an era of perpetual abundance. That's that crazy red line that's going up. Or the Terminator is going to come. It's going to drive us to extinction. That's the line going down.
Or maybe it's just going to improve the normal march of productivity growth by some relatively modest amount. That's that continuation of the trend line. This is a very, very high range of uncertainty, to say the least. I love the way that Derek Thompson framed it in his Substack just yesterday. He said, look, nobody knows anything. It is too early in the rollout of modern AI, of generative AI. It's clearly a powerful technology. It can clearly do a lot. There's reason to be optimistic about it, there's reason to be pessimistic about it. It's just too early to know for sure. So I love that way of thinking about it, because I do think we're extremely early in this era of AI diffusion, and its changes are not clear yet, especially when you look hard at the evidence. So what do you do in an era where there's potentially a big deal? A transformative technology perhaps has appeared, is starting to diffuse, in many cases diffusing rapidly throughout organizations, throughout the economy. But nobody knows where this is taking us. Nobody knows anything definitive about where this is taking us. I want to propose a very simple three-part playbook for succeeding with AI in the circumstances we're in now, where nobody knows anything. The first part of the playbook is you've got to make a bet. You've got to commit. I strongly suggest you commit in the pro-AI direction. Stop equivocating, stop sitting on the fence. Commit as an organization to AI. And one of the ways you show that commitment is by making AI an OKR. In other words, make AI something that you expect out of the people in your organization. Communicate that expectation, make it clear, repeat it over and over again. Eliminate the idea that AI is one of many things that are going on or that it's a flavor-of-the-month technology. Communicate the expectation. And then do the other thing that good OKR companies do: measure progress toward the goal.
If you have a goal about time savings or value savings or just the raw amount of AI use, great, set that goal and then measure progress toward it over time. The second really important part of the AI playbook when nobody knows anything was driven home to me by Steve Jurvetson, who is a very good venture capitalist, early in both SpaceX and Tesla. When I was researching The Geek Way, he drove home to me the importance of the difference between two schools of thought about how to execute complicated, interdependent projects in an environment of high uncertainty. And to oversimplify a little bit, those two schools are the waterfall method and the agile method. Waterfall is an upfront, planning-heavy method for executing big projects successfully. What waterfall says is, look, we have to get this right. This is a high-stakes environment, and the way to do that is to sit around with a lot of smart people, anticipate how it's going to unfold, anticipate every contingency, plan for those contingencies, game it out, and make sure when you start that you know how it's going to go. The main problem with waterfall is that it doesn't work. And the reason it doesn't work was beautifully summarized by Clay Shirky a while back when he said waterfall is a pledge on the part of everybody involved not to learn anything while doing the actual work. When nobody knows anything, when the future is that uncertain, you cannot plan your way to success. You can't anticipate all the contingencies that are going to come up and what the right response is to all those forks, all those branches, all those contingencies. I was incredibly eager to interview Steve for The Geek Way, because about 10 years ago I met him at a conference in Silicon Valley, and when I said, what's up? Steve said, I'll tell you what's up. SpaceX is going to bathe the world in abundant, cheap Internet connectivity from space. They had not yet launched their first Starlink satellite.
And I thought Jurvetson was nuts. I thought he had been drinking too much of the Silicon Valley Kool-Aid. Because my naive thinking was, we're 60 years into the age of space communications. If it were really possible to bathe the world in abundant, cheap bandwidth from space, wouldn't one of the incumbents in the aerospace and communications industry have figured that out? Well, none of them did, and this complete upstart organization did instead. So it occurred to me that Jurvetson was seeing some things and was aware of some things that I wasn't. So when I researched The Geek Way, I asked him for an interview. He very kindly consented. I want to give you a quote that he laid on me that has stuck with me ever since. And it's about this distinction between a planning-heavy, upfront-heavy approach like waterfall, and what the geeks do, which is to take a very agile approach: to try things, to get rapid, valid feedback, to incorporate that feedback as quickly as possible, and then to go back out in the world and try things again. It's kind of the ultimate industrial-strength learn-by-doing approach. And what Steve said to me is, he said, Andy, you have to understand the companies in my ecosystem. The way that we learned to build software, the agile way that we started to build software about 25 years ago, that's now the agile way that we build everything. Hardware, software, it doesn't matter. Atoms and bits, it doesn't matter. We take an agile approach because that's how we maximize our learning. And I love this. Jurvetson said, I sometimes think I have a sixth sense. I can see dead companies. They don't know it yet. They're dead companies walking, but they're the walking dead because they're not responsive enough. And he said, the companies that are part of my ecosystem, the ones that I try to help out, the ones that I invest in, we run circles around these waterfall-heavy, planning-heavy incumbents. We innovate every couple years on things that take them close to a decade to do.
As many of us know, we entered this very new era of software engineering, of coding, just a couple months ago with the release of some of these amazing models that are so good at coding. I think if I interviewed Jurvetson again today, he would not talk in the timescale of a couple years. He'd probably talk in the timescale of a couple months. So the premium on learning by doing just becomes higher, because the amount of doing that you can do has increased. And the amount of learning that goes along with that is absolutely going through the roof as well. So part two of the playbook is to set up a fast-cadence feedback cycle of learning by doing. And then the third part is to spread the best practices that are already in your organization. There was an article from late last year about the power users in any organization. And at Work Helix, what we found with every single company that we have worked with is that the distribution of AI use follows a very, very particular shape. There's a small number of people who are using it a lot, and then most of the people are using it sparingly. And there are a lot of people in every organization we've looked at who are still on the sidelines. They're experimenting, they're dipping a toe in, or maybe they're not even doing that much yet. So there's a small group of power users, and there's a large body of people who really could use some help understanding how to put this technology to work. Now, to us that implies a gigantic opportunity. Harness what those power users are already doing, harness what they know, harness the best practices that they've figured out, and start to share and spread them, diffuse them as widely as possible. We are in the early stages of an era of a very deep reimagination. Reengineering, use the term that you want. But we're going to change the way that work gets done inside companies and between companies. There are a couple schools of thought about how you accomplish that reimagination.
And here again there's a contrast between a planning-heavy approach and an iteration-heavy approach. The planning-heavy approach to reimagination or reengineering says, look, get a bunch of the smartest people you can find in a room with a whiteboard, and you diagram out how work is going to get done in the new world of AI. And the geeks say, no, let geeks do what they do, which is grab the powerful new tools, look at problems that need to be solved, put those tools to work, and come up with something that does the work in a profoundly different way, a much more enabled way. The geeks say that is actually where the reimagination is going to come from. It's primarily a bottom-up process supported from the top. So you've already got people starting that reimagination work. I think one of the goals of an organization in this era of profound uncertainty is to identify those folks, learn what they're doing, and spread their good ideas as broadly as possible.
A
Andy, that was great and very provocative. I'm going to start with a question or two of my own, but we do have a lot of questions coming in from participants. With AI, as with any new technology, the obvious debate is: is it overhyped, is it underhyped? For most people, this one is number one, two, or three on their strategy list. It's like, we're doing this, right? We may experiment our way into it, we may go big time, but we're doing this. And you just quoted Derek Thompson, which I love: nobody knows anything. But you were also urging people to commit at a certain point. Is that an article of faith that AI is worth the commitment, or are you a techno-optimist? How should we think about this? Because you can argue it both ways.
C
I am a techno-optimist, Adi, as you know. But we don't just have to take AI on faith. Economists have come up with three ways to identify what they would call a general purpose technology, which is economist-speak for a gigantic big deal for the economy, at the level of the steam engine or the internal combustion engine. One of these technologies that comes along and just accelerates the overall growth of the economy. There are three ways to recognize them, and you can recognize them in advance. Number one, does the technology itself get better quickly? Check, with AI. Number two, does it spawn other complementary innovations? We've already got AI-powered self-driving cars. We are very rapidly trying to build and diffuse robots that have AI at the core of what they're able to do. So I think that's probably a check as well. Then the third criterion is, does it remain confined to one or two sectors, one or two economies, or do you start to see it being used all throughout the economy? We're already seeing AI, generative AI, used throughout the economy. So when I think about those three criteria and this new chapter of AI: yes, yes, and yes. That's where my faith... it's not faith. That's where the evidence falls that I rely on to say this actually is a big deal.
B
Ever invest in something that seemed incredible at first but didn't live up to the hype? Marketers know that feeling: optimizing for impressions and then not seeing revenue. Instead, invest in what looks good to your CFO. LinkedIn Ads generates the highest ROAS of all major ad networks. Spend $250 and get a $250 credit. Just go to LinkedIn.com/IdeaCast. Terms and conditions apply.
A
Legal teams are under more pressure than ever. More data, more complexity, more scrutiny. They need AI built for the realities of legal work. For more than a decade, Relativity has invested in AI built specifically for legal teams, designed to meet legal standards and support defensible decisions. The result is explainable AI that handles tedious tasks, so judgment and critical decisions stay where they belong: with you. Relativity is the AI platform for legal work. Learn more at relativity.com/ideacast. All right, so let's project forward. If AI does what it can do, there are two big questions that occur to me and to a lot of people. If these tools are accessible to everybody, how do we think about durable competitive advantage? Where does that come from? That's number one. And then number two, what in the world does this do to management and to layers of middle management?
C
Yeah, those are two big questions, right? And I hear the same thing you do, which is, wait a minute. If the cost of doing difficult things like writing software goes down, doesn't that mean that the competitive advantage of being good at software goes down? I don't think that's the right way to look at it. The cost of writing software has already been declining. I don't know, is it 10x? Is it 1,000x? I don't know, but it's at least a couple orders of magnitude cheaper to write software today than it was 10 years ago. So those costs have already come down a lot. Does that mean that everybody is equally good at software today? Absolutely not. The era of powerful software has sharpened, has increased, competitive differences between companies. I think AI is absolutely not going to be the great competitive leveler. It's going to make the distinctions between companies much, much bigger than they are today. We're starting to get decent evidence on this from looking at the places where AI has been put to use most heavily, which is in writing software. There's a really interesting pattern emerging. Adi, you know, you and I both have careers long enough that we've been talking about the networked organization and the flattening of hierarchies for a long time. We're finally starting to see that. It turns out that as people get access to this incredibly powerful new AI, they tend to do a little bit less of the communicating, collaborating, coordinating work. Now, that work doesn't go down to zero, but it feels like the administrative busywork of running a project and coordinating with other teams, we're handing that off to the technology.
And so the human amount of that is going down, while at the same time the evidence is showing that the very best people, the people who you really want to unleash on the big problems that you're confronting, those folks have more time freed up for exploratory work, for innovation, for envelope pushing, which is exactly what we want them to do. I've also heard people say that the skill of running a fleet of agents is very much like the skill of being a very good manager, especially in a high-tech field, or a product manager. So I don't think the kinds of skills that we learn about running large, complicated efforts are becoming obsolete at all. I think there's going to be an incredibly rich blend of human beings involved in that work and pieces of technology involved in that work.
A
So let's talk about that. I'm sure there are some people listening who say, yeah, we have agents, and I'm very comfortable managing them, and maybe in some cases they're managing me. For people for whom that still sounds slightly sci-fi, or like something at some point in the future, talk about that. What should make us comfortable with the idea of having agents in our work lives?
C
A software agent is just a piece of technology that you don't have to spoon-feed as much as we're used to. You can tell it to go do something relatively complicated, hopefully give it pretty clear instructions, and it will go off, in many cases for hours on its own, and then come back when it's got that work done and present its results to you. Now, you know, we had a couple of incidents with this wild new technology, OpenClaw, where people were letting it maybe have a little bit too much autonomy. Giving it your credit card and telling it to go out and do all of your Christmas shopping for you is probably a very, very bad idea. But with appropriate guardrails in place, we can now set up these technologies, let them go do their thing for longer and longer periods of time, let them interact with each other to double-check, audit, make sure the guardrails are in place, coordinate their activities, and then they kind of come back and give you a progress report. And very quickly over time, there's more and more of that progress and fewer and fewer of the bugs. Now, that doesn't mean that we can just turn agents loose today. That doesn't mean that we're going to be able to do that next month. But are these pieces of AI exhibiting greater and greater ability over time to do increasingly complicated things, and in some cases to interact with each other as they do so? Absolutely, yes, in both cases. And nobody that I talk to says that that's about to level off.
A
My guardrail would be, I would give the bot my credit card and just say, don't buy anything tacky. That's not nice, that's not happening. So, all right, there are a lot of audience questions coming in, and there are a couple that I want to combine here. One is from Moumita, who is a customer success strategist. The other is from Elena, who is an executive director. They're interested in success metrics and KPIs. As we become more productive and efficient with AI, how should we be thinking about success metrics? That's one question. And for knowledge workers, what do real KPIs look like?
C
Yeah, there are a couple fascinating parts to that question. I break it down into two areas. AI is only going to move the needle on KPIs in an organization, on the metrics that we care about, if people are using it. So the first thing that we recommend is: are you tracking the usage of your AI? As I said earlier, are you in the situation where there's only a really small group of people who have embraced this technology? Do most of them currently work in software engineering? And is the rest of the organization really still on the sidelines? We can measure that. Now, what you want is to not just see the power users doing their thing and increasing their use. You want to see broad adoption, and you want to see broad increases in use. So the first part of the very rich KPI question is: make sure you're correctly measuring your usage, and you want that usage to be going up. Now, the reason you're engaging in all of that use and the reason you're paying your AI suppliers is because you want some performance measures to get better. And as the questioner points out, some of those performance measures are pretty straightforward. In a call center, for example, you would like agent, meaning customer service rep, productivity to go up, you want call times to go down, you want resolution rates to go up. We've got good metrics for that category of knowledge work. For other categories of knowledge work, it's a lot less clear. I was talking with the head of an investment bank who said, you know, our analysts, what they really do is very complicated analyses, but it looks like the product is a slide deck that we show to a customer. If I told my analysts to be more productive, I hope they wouldn't just start churning out two times as many decks. So there are parts of knowledge work that are actually pretty complicated to measure. Not impossible, but it's a little bit more subtle.
The really good news is that the economist's toolkit for measuring changes, even if they're subtle, even if they're a little bit distant from the intervention like AI that we're talking about, our toolkit for understanding whether this thing that we invested in caused good things or bad things to happen, that toolkit is really good right now. We put it to work at Work Helix. It's part of a thing called the credibility revolution, where our ability to say, hey, you tried something, did it cause the KPI to move in the direction that you want? Our ability to do that is much, much better than it was even 20 years ago. So put that to work. Put the credibility revolution to work on your KPIs.
A
So one of the issues that leaves people skeptical is workslop. This question is from Suzanne, and it has a lot of upvotes. A lot of us are seeing workslop, and by that we mean AI-generated content that looks great but is not so great, and maybe creates more work for colleagues. When you look at workslop, do you think of it as a training problem, a culture problem, a strategy problem, something else?
C
It's a problem, and it's a little bit of all of the above. And the solution that I hope we don't head toward is my agent generating slop and your agent trying to filter it out so that the stuff that gets to you is only the good stuff. We don't want slop wars going on inside the enterprise or between companies. I think one of the really important distinctions here, Adi: you've heard of the distinction between a process-focused organization and an outcome-focused organization. The geek companies that I studied and that I wrote The Geek Way about, these are very much outcome-focused organizations. They understand the need for process, but what they're really about are the metrics that are going to indicate whether the business is heading in the right direction or not. How are those metrics doing? I think there's going to be less demand for workslop in an outcome-focused organization than in a process-focused organization. A process-focused organization asks: did you bring all your deliverables to the next meeting? Did you check off everything that we said we were going to do? Did you get everything on everybody else's desk, whether or not that moved a KPI that any customer would care about? In process-driven organizations, I think the workslop volume is already pretty high and unfortunately probably heading higher. In outcome-focused organizations, I expect that situation to be pretty different. There, your job is not to generate documents to satisfy the process. There, your job is to get stuff done that makes a customer happy.
A
So here's a question that goes back to your discussion of agile versus waterfall, and this is from Rich, who is a founder and CEO. This is really a request for practical advice. What is one thing that a leader of a traditional company can do to move away from waterfall thinking and that culture toward adopting an agile mindset?
C
Wow, that's a great question. The one thing that a leader can do (it's a two-part answer, but the parts are related, so forgive me) is to understand the difference between what Amazon calls one-way doors and two-way doors. In other words, can we afford to make a mistake here, or not? SpaceX has launched a lot of rockets, and they have crashed a lot of rockets, because they realized that in many ways launching a rocket is a two-way door. Launching a rocket with human beings on it is a one-way door. Crashing that rocket is an unacceptable failure. So SpaceX tries very hard, very successfully, not to crash rockets that have people at the top of them. However, the way you learn how to build those rockets is by trying and failing, and by being part of an organization that says, look, you built that rocket to learn something. We didn't put any people on it, it crashed. That's actually good. We learned something from that. So part of it is understanding the difference between one-way doors and two-way doors and being very clear about that. The other part is, for the two-way doors, let people try stuff, let your team try stuff. Signal in every way that you can as a leader that failure at two-way doors is actually okay, as long as you learn something. What did we learn? What are we going to do differently next time? Great, let's get back out there and try things again. That's how I think a leader can start to build a more agile organization.
A
I want to talk to you about talent, about hiring and skills. You probably saw the report that IBM is tripling its entry-level positions. It's sort of counterintuitive, because we've sort of assumed that AI is going to be wiping out entry-level jobs. I think Microsoft has said something similar. What do you think? How should HR be thinking about hiring, about hiring profiles, about skills? People are glibly saying we want more liberal arts majors, it turns out, and I don't know if that's a real thing or a sounds-good thing. But what are the skills? What are the profiles? How should companies be thinking about hiring people?
C
For this era, the reason that a lot of companies pulled back on their entry-level hiring is because they anticipated a whole lot of automation via AI. And in particular, they anticipated automation that could do the more routine stuff that we relied on junior people to do. Now, the problem there is twofold. Number one, how else are people going to learn to do the job except via on-the-job learning and training? Apprenticeship. The way you learn to do difficult knowledge work is by helping somebody who's good at it with the routine stuff. And when we put too much automation in too quickly, we lose that apprenticeship ladder. The other mistake, though, is that if you cut off your entry-level hiring, you are cutting off the pipeline of the people who are most likely to be the AI enthusiasts and AI power users in your organization. There is a big demographic falloff: as people get older, we tend to be more set in our ways and less willing to try crazy new things like AI. So if you're pulling back on your entry-level hiring, you are probably sacrificing future opportunities to learn and the skilled people of the future. You're also turning off the spigot of the most enthusiastic power users of AI in your organization. I think it's for those reasons that I'm not too surprised to hear that organizations like IBM and Microsoft are now trying to get a pipeline of very talented, AI-forward young people.
B
That was Andy McAfee, research scientist at MIT, giving a masterclass as part of HBR's recent Strategy Summit 2026. If you found this episode helpful, please share it with a colleague. And be sure to subscribe and rate IdeaCast in Apple Podcasts, Spotify, or wherever you listen. If you want to help leaders move the world forward, please consider subscribing to Harvard Business Review. You'll get access to the HBR mobile app, the weekly exclusive Insider newsletter, and unlimited access to HBR online. Just head to hbr.org/subscribe. Thanks to our team: Senior Producer Mary Dooe, Audio Product Manager Ian Fox, and Senior Production Specialist Rob Eckhart. And thanks to you for listening to the HBR IdeaCast. We'll be back with a regular episode on Tuesday. I'm Alison Beard. Let's be honest, most HR platforms aren't exactly a joy to use. Deel's different. It's AI-native, keeps you compliant, and grows with your team. Whether your team is five people or 50,000, HR, IT, and payroll run on one platform that just works. See for yourself at Deel.com/HBR. That's D-E-E-L.com/HBR.
Date: April 2, 2026
Hosts: Alison Beard (Harvard Business Review); Adi Ignatius (Editor-at-Large, HBR)
Guest: Andrew (Andy) McAfee, Principal Research Scientist, MIT Sloan & Co-founder, MIT Initiative on the Digital Economy
This episode features a masterclass by Andy McAfee from the HBR Strategy Summit 2026, focusing on the critical question: "Who’s going to succeed with AI?" McAfee examines the massive uncertainty surrounding AI's economic impact, the rise of "geek organizations," and what separates winning companies from the rest in the AI era. The discussion moves through practical frameworks, organizational culture shifts, management changes, metrics, and talent strategies for AI adoption—with actionable guidance for leaders navigating the generative AI frontier.
A. Commit & Make AI an Organizational Priority
B. Agile Over Waterfall – Learn by Doing
C. Spread Best Practices – Amplify Power Users
Q (Adi Ignatius): Is betting on AI a leap of faith or techno-optimism?
A (Andy McAfee): It's not faith; AI passes all three of the economists' tests for a general purpose technology: it improves rapidly, it spawns complementary innovations, and it is spreading across the whole economy.
Q: If AI is accessible to all, do competitive advantages erode?
A: No. Cheaper software never leveled the playing field, and AI will widen, not shrink, the gap between the best companies and the rest, while freeing top performers for exploratory, envelope-pushing work.
Q: What are “agents,” and how should we get comfortable managing them?
A: Agents are software you don't have to spoon-feed; with appropriate guardrails, they can work autonomously for long stretches, coordinate and audit one another, and report back, with their capabilities still improving steadily.
For listeners and non-listeners alike, this episode delivers a robust framework for thriving with AI—centered on clarity, culture, and relentless learning.