
Rodney talks with Section CEO Greg Shove about why enterprise AI adoption is stalling, why “AI layoffs” are often cover, and what it actually takes to redesign work for the intelligence age.
A
When the bubble bursts, all the bozos will get cleared out. All those AI influencers are in my LinkedIn feed every day. They're gonna be gone. That'd be good for my LinkedIn feed at least, you know, and like the people who really give a shit about this and want to work at it, they're gonna keep their heads down and just kind of plug away.
B
Hey everybody. Welcome back to Work with the Ready, a podcast about modernizing organizations as the future of work collides with the present. I'm Rodney Evans, and today we gave Sam a break and I am joined instead by someone I've been wanting to talk to for quite a while now, Greg Shove. Greg is a seven-time founder and CEO who is currently the CEO of Section, an AI workforce transformation company helping orgs get real value from their AI investments. He's helped train thousands of leaders and knowledge workers on how to actually use AI in their work, and was very recently named one of Edelman's AI creators to know in 2025. Greg, welcome to the show.
A
Thank you Rodney. Great to be here.
B
I am thrilled about this conversation because Greg is really operating at the exact intersection that we are always talking about on this show and that is what the work design is that is necessary to adopt AI and thrive in the intelligence age. But first we're going to do a check in question because we always do. My check in question for today is what's something happening in the AI hype cycle that drives you nuts right now?
A
The AI hype cycle. I mean, that's what drives me nuts. You know, listen, I get it. People need to raise money, and to raise money you have to tell a story about the future and how the future is going to be great. Here's what I really wish: I wish that the CEOs of the major AI companies would stop saying we should slow down and really contemplate the impacts of what we're doing. Because they say it with such earnestness, and they don't have any intention of slowing down. So I wish they would just stop saying it. Don't be insincere. Just admit and tell everyone what you're doing, which is moving at the fastest pace possible regardless of the consequences, because you think it's a war and you want to win. So just be straight up with us; we'll figure it out.
B
Yeah, that's a great answer. My answer is somewhat related, which is: so much of what I read and see, whether it's about agents or robots or workflows, is about all of the things that human beings aren't going to have to do anymore. We're not going to clean toilets and we're not going to drive cars and we're not going to write greeting cards and we're not going to push code. And I rarely see anything about what we are going to do, and what is going to be meaningful in this life as all of these pieces of our functionality are stripped away. And I would just, I don't know, I would find it inspiring and compelling to hear a little bit, just like 10%, of what that future might be.
A
Well, you know, you don't like the version of that answer, which is we're all going to be poets and sitting on a beach and collecting our UBI check.
B
You know what? I don't buy it, Greg. I don't buy it.
A
Shocker. Shocker. Yeah, yeah. That's because the answer is that no one knows, so there's nothing to talk about. No one actually has any idea what's going to happen, which is probably the honest answer.
B
That's fair. So I want to get right into our conversation talking about enterprise AI. The listeners of this show know the statistics around how AI adoption is performing, and the statistics are pretty grim. You know, we see something like 10% of use cases delivering the ROI that's expected. What's going on? Why is it so shit?
A
Sure. Well, a couple of things. First of all, consumers love AI. So let's be clear. When you talk about adoption, let's differentiate between consumer and corporate or enterprise. Right?
B
Yeah.
A
Consumers love AI. This is a revolution that is just getting started in some ways and is gaining momentum, I'd say, on the consumer side. And of course, Silicon Valley is based on an addiction business model, and so Silicon Valley is making AI very addictive. So consumers, in addition to loving it, will likely become addicted to it. And we can see that, right? When AI comes back and says things like, do you want me to do that, or do you want me to make a PDF, or create a spreadsheet, or make the report, I'm like, yeah, sure, do that. And then, do you want me to do this? Yeah, do that. Right? It's just kind of pulling me in with its little addictive claws, I guess, on the consumer side. On the enterprise side, it is shit. Adoption is not anywhere near what we thought it might be at this point. It seems to be around 10, 12, 13, 14% depending on the organization. I don't think this is rocket science. I think it's simple. Two parts to the answer. The first is employees are getting the gain, and they're not sharing it with employers. So a lot of this data around there being no ROI — I think there's plenty of ROI. It's the early adopter employee that is getting that ROI, because they're using AI in their work all the time. But they're not going to share that gain with their employer. And why should they? Because the second part of the answer is everyone else is worried about getting laid off. And so, you know, we have deployed this like it's software. What a mistake. This is not software. This is not ERP or CRM or marketing automation. This is co-intelligence. It doesn't behave like software. It's unreliable. It hallucinates. It makes mistakes. It can be incredibly productive and insightful, and it can be really stupid. It's sort of a genius and a clown at the same time.
And it doesn't behave like software because software, for most of us, hasn't threatened our employment, hasn't threatened our livelihood. So most companies deploy it like software and then they're shocked that no one's using it, because they haven't really answered the question you started with, which is: what am I going to be doing after? If I'm a good corporate citizen and use AI every day and find a way to get six, eight, ten hours of time back, maybe more, what's going to happen to me? Will I go on to more interesting work, or will I get laid off, or will my team shrink? Because now, the team of 10 — we don't need 10, we need six. So yeah, this is in a way not surprising. And we're going to have to bust through this anxiety and resistance if we're going to get adoption levels that are 60 to 70 to 80% in a typical large organization.
B
Yeah. I mean, a lot of times in the work that we do, when C-suite executives are telling me about a frustration or a tension that they're feeling in their organization, I usually remind them that the people, the talent force, is probably operating rationally based on the environment that they're in. And what you just described is very rational behavior. Okay, so I've got 10 pocketed hours. Where is the incentive to give them back? Like, why should I? That's A. And B, a lot of top performers, as you and I both know, are punished for their competence. So they have more workload, more roles, less balance than the slackers do. I am imagining somewhere in that population, those people are going, like, I'm going to take those hours and get a tiny slice of my life back.
A
Yeah, totally.
B
Walk the dog. Immediately fill it, right?
A
Walk the dog, take a yoga class. I mean, listen, remote work gave us Friday afternoons back. Bottom line, AI is giving us Friday mornings back, if you're one of those early adopters and you've figured out how to use GPT at work, whether it's sanctioned or not. Meaning, if it's not sanctioned, again, those early adopters are using it anyway and getting those hours back. And to your point, what's the incentive to share that with my employer? Plus, it just takes time. Even if the employee was willing to share that time, if you will, back with the employer, it takes time to redesign workflows, redesign teams, as you know, change the composition of the organization. That doesn't happen overnight. It takes years for those changes to happen, and then therefore to really see the impact of AI in the organization in terms of making it more productive, if in fact that happens. So most of these AI layoffs that we hear about or read about — they're bullshit. There's no way in those organizations AI is that well integrated and changing the nature of work and the nature of teams such that they need to make those layoffs. That's not happening. That's just cover for CEOs that want to do something that's rational as well. Every CEO I talk to wants to grow their business at 25 to 30% a year for the next five years and hold headcount flat.
B
Yep.
A
And they brag about it, you know, in board meetings, on the golf course. They brag about it in public. You see these CEOs doing this on their earnings calls and in their all-hands meetings. And I get it. That's rational. It drives your stock price up.
B
Sure.
A
And that's where 90% of your compensation is, if you're an executive. We're going to have to find a way to get through this, you know, in the next 10 years.
B
The other thing that this sort of behavioral pattern brings up for me: in a lot of very large organizations, there is a level of institutionalized idiocy. We're saddled with organizational debt. We're all just walking through sludge all day because the bureaucracy is crippling. And as a result of that, I have a fair number of interactions with people at the middle to upper strata of organizations who have let their muscles for strategic activities atrophy over the last 10 to 20 years. And as AI takes the more rote, easier, dopamine-producing layer of work, those people don't know what the fuck to do. And so I feel like part of the resistance is: do we all actually want that space opened up? Because there are people who, I think, are loath to admit that they don't know what to do with it.
A
No, absolutely. We have a lot of people in the knowledge economy that make a good living passing information from one silo to the next, right? I call these the cut-and-paste jobs and the lookup jobs and the synthesizing jobs and the reporting-up jobs. I think this is what AI is kind of revealing: there's a lot of knowledge work that is repetitive, kind of mind-numbingly boring, but quite lucrative if you can get the job, especially in a big tech company, or just a big company in general where you get decent health benefits and kind of other perks. So yeah, why not stay in that job for as long as you can? And what would I do with that extra time? I think that's right. Listen, AI is truth serum for organizations, and AI reveals a lot. When you come to apply AI to an organization, you'll see the lack of growth mindset, you'll see the lack of organizational coherence, you'll see a lot of busy work, you'll see the undocumented workflows, you'll see a lot of people working, but not in any synchronized way. So when we go and try to automate that with AI, it doesn't work, because you're kind of slapping AI onto an already broken workflow that has too many people who are unclear exactly what they're doing. This is why the worst-run organizations will just be worse with AI. It's not going to help them.
B
And the worst run, they'll just be bad, but faster.
A
Yeah, exactly. They'll just be bad, and wasting more money now on the AI as well. And the best-run companies, I think, get this strategic advantage if they can figure it out and deploy it successfully. So, you know, my model as an investor is going to be: invest in the best companies in the category. If you think they're committed to transforming to become an AI-first or AI-enabled organization, they're probably going to create more distance from their competitors, because they're already well run and they'll have a much easier job of applying the AI into their products and services and into their internal organizations. And the rest of them are just going to fumble around more.
B
Yeah, absolutely. What do you think actually moves an organization from those disconnected efforts to something that looks like large scale transformation? Like what are the non negotiable ingredients for enterprises to shift?
A
It's a long list, so let's try to make a short list.
B
Cool.
A
The first is the why of AI and an AI manifesto. Like, why are we doing this, and how will we do it? And I don't mean a governance policy. All governance policies, or sort of AI policies, do is actually suppress usage. So you want the purpose: why are we doing this, and in what way will we do it? What's sort of the culture around our AI? Are we going to celebrate it? Are we going to share our wins and losses? Are we going to consider using AI not cheating, but actually working smart? So I think one non-negotiable is you need a why of AI with an AI manifesto. You need to be public about it and publish it to customers, to investors, and obviously to employees first. The next thing you need to do is give your employees good AI, and pay for it, and give it to everyone. I'm blown away by AI leaders who want to give AI only to some part of the organization. Like, the best AI to some people, not to others. Makes no sense. Everybody needs to get a great AI, and great AI means the latest version of whatever enterprise AI you want to buy for your internal ecosystem. Third is constant coaching, or training. This idea that you're going to do an AI week or a lunch-and-learn or a bunch of workshops and be done, and employees will figure out how to use these tools? Again, very naive. That's what you would have done 10 years ago when you deployed your new ERP. This is not software. These technologies are really exciting and terrifying at the same time, and their capabilities are changing all the time. So you are going to have to constantly coach and help people figure out how to use these tools. Finally, you have to help them find their use cases. It's not enough to learn how to prompt. You need to get people faster to value.
And that means, in your specific job, in your specific division and role and country, what are the use cases that are going to give you some value as an individual employee? Get to that knowledge and that applicability much faster. It's taking way too long. Even after you've trained employees how to prompt, you've got to get them faster to their use case. Finally, there's what you said at the top of the podcast. What's going to have to come next after that is to help them answer — and by the way, AI can help us do this — what will employees do with this new time? It's one thing to save them a bunch of time. That's great. But as a manager, as a leader in particular, your job, and I see very few leaders and managers doing this, is to come up with the next set of work and tasks that are interesting and valuable, to basically get your team engaged in that work. What's going to come next is there's got to be new work that is interesting and higher value, and the manager has to come up with that. We've got to fill in these gaps a year from now, which is: okay, we got 10 hours a week back. What is more interesting and strategically valuable work you can do for the organization and for yourself? So those, to me, are some of the non-negotiables.
B
Well, and your last point is such an interesting one to me, because one of the top three presenting problems that leadership teams come to us with is an inability to prioritize. They can't actually get after more strategic initiatives with the current resources that they have, for the most part. And this, theoretically, is a way to create enough slack to get after the things in the maybe-someday column. And yet, to your point, I don't hear that narrative very much, which is: this is what 2026 looks like, and if we can find 15% of slack, here's the dual transformation play that we would love to be able to get after. Let's start dreaming about that now, because almost certainly we'll have the capacity to do it later. I just don't hear that logic.
A
Yeah, but isn't that because of what you said earlier, around that atrophying of those strategic skills and analysis? Because that, to me, sounds like strategy, right? And for sure, I think most of us have our heads down executing what's in front of us. And we like the reliability of that. We like the predictability of that. We can manage our time because we know what that kind of work is. Versus this strategic work — it's tougher. That kind of work is more uncertain; it asks more questions than it answers. Most people want answers, not questions. So, yeah, I think that's right. This is why startups can apply these kinds of technologies so much more easily and so much faster. One reason is they're just small companies, small organizations, so the change management challenge is not as great as it obviously is in a larger organization. I think the second is every day is a strategic challenge in a startup. So you're always asking these kinds of questions, and you're always saying, well, if I had an extra hour, I'd probably do that, because I've got this list of burning to-dos, because I'm a startup fighting for relevance and survival. So I'm not usually short of important to-dos. And if I free up time, I can go to that list and get on with it. Whereas in big companies, that sort of imperative or that level of intensity has just dissipated over the years, over the size of the organization, and over the amount of resources the organization has. And they lack that edge, that sharpness around: how would we use our time if we got some of it back? How would we expand margins? I think we should all walk around with, like, five questions, and just keep those questions top of mind when we work. You know: growth, margin, organizational health, whatever is relevant to you.
It may not be five; it might be only two or three. But I think those are the questions we should always be asking ourselves at any moment. And they're more strategic questions, right? They're about value creation.
B
Yeah. It's interesting, to your point about startups also, because in that space, when teams are still looking for PMF and they're still really resource-constrained, there's also just not the same preciousness about, oh, is this technology going to fill in this form for me? Then what am I going to do on Tuesday morning, which is form-filling time? Whereas in a lot of very large organizations, one of the things that I find really challenging is you have a lot of people that are insulated from the market and the customer and, frankly, the financials by hierarchy. And they're just like, I don't really know how the money works here. I don't really know how it goes from the street through the org chart into my bank account. And frankly, I don't really care. And in a startup, you just don't have that luxury. Everybody sort of has to be paying pretty close attention to the value creation engine.
A
That's right. Everybody in a startup knows the levers of the business. You're right. And in large companies, most people can't explain those levers, financial or operational. They just don't see that big enough picture, and/or they don't care to. Again, they sort of want to get their job done. Also, startups don't care about the risks with AI. There are all kinds of risks with AI, including just tactical risks like hallucinations, getting a wrong answer. The reality is the gain outweighs the risk. And if you're in a large organization, everything's about risk management. We want to manage away risk so that we're not at personal risk in terms of our employment, and of course we don't want to put the organization at risk. That's why Google didn't launch Gemini — didn't, basically, launch an LLM. They sat on it because of the risk. They sat on the technology, which they developed, because all they could see was risk. They couldn't see any upside. And it's just so interesting, you know, it took OpenAI and ChatGPT and a billion users to really awaken them from their slumber. And it seems like they've responded, right? Gemini 3.0 sure looks legit, and the user numbers are starting to be impressive. I think next year is Google's year. I think Sundar looks like a genius by the end of next year, and they'll have a billion active AI users. They were asleep at the wheel, and now they've woken up.
B
Interesting, interesting. I want to go back to what you said about risk, and large companies being focused on risk mitigation or risk elimination, because I see the way in which large companies value a lot of things that AI doesn't give a shit about. And one of those things is — it's not actually risk mitigation, it's the performance of risk mitigation. It's the overwrought compliance, it's the 10-ton governance model that doesn't make any fucking sense to get anything done. It's all of the rules-based culture. And our business for the last 10 years has really been about trying to get at ways of working, but we've always coached orgs to do that through the lens of experimentation. And large organizations do not like experimentation. They don't like that word; they think it means chaos, they feel like it means mess and vulnerability and looking bad and blah, blah, blah. When I think about the risk thing, when I think about the experimentation thing, when I think about silos, I'm just like, these are things that AI is not going to honor. It is not going to honor your stupid org chart that nobody looks at, or the 82-step approval process that makes no sense for shipping an update to your customer. And so I'm just curious — obviously you're talking to a lot of leaders specifically about their vision for AI. How are they talking about how they're going to change their companies? Because this stuff is in the DNA of these organizations.
A
Yeah. I don't think most of them have any clue about what's about to happen, and I don't think they think about it yet as changing the organization. I think they think about this as productivity software, and therefore they really think about this as a path to hold headcount flat or reduce headcount in order to grow, and grow earnings at the same time. I think only a few are really contemplating the amount of change that might need to happen for the organization to leverage these technologies. They don't really understand the technologies in terms of how powerful they are. How, to your point, AI has no boundaries.
B
Right.
A
None. Right. It crosses everything: crosses countries, crosses language, crosses time, crosses function and capabilities and so on. I just don't think most executives realize this, really, and what it means for their organization. Again, we're just getting started here, and I think what will have to happen is more bankruptcies, or companies that are really impaired because they did not respond to or mitigate the risk from AI. We've seen a few — everybody talks about Chegg — but frankly, we need to see more of them to wake up management teams and make them realize that there's going to be a real risk here unless you really appreciate these capabilities and integrate them into your products and services and into the way you work. But, you know, at the end of the day, we're motivated by compensation a lot of the time. Startups see gain, and as yet, incumbents aren't seeing much downside. Incumbents, existing firms, legacy firms, will likely see less of the gain, because they can't move to that gain that quickly. They're going to have to see more risk. What woke up Google was risk from OpenAI and ChatGPT to their monopoly on search. And then all of a sudden everything got mobilized and Google started to make progress with Gemini. That has to happen in industries, and I think management teams need a wake-up call, and it's going to be a competitor going bankrupt, or a competitor reporting two, three, four, five quarters of trending-down revenue. If you're in the media business, if you're in the agency and creative services business, if you're in the entertainment business, we're going to start to see some of these impacts on these businesses — real impacts on their revenue and bottom line. When that happens, I think management teams start to take things a little more seriously. Until then, I don't know, I'm not so confident.
Which makes it hard, by the way — as you know, I'm sure these are some of your clients — makes it hard to be the AI evangelist inside a large organization, right? Absolutely. If you're not getting the resources, if you're not getting the traction with the management team, if you're getting lip service. You might actually have the most buy-in from the CEO, who's like, hey, let's do this, and everybody else is like, I don't know. So yeah, I have a lot of empathy right now for heads of AI, whether it's actually their job title or they're the ones that sort of volunteered to do it as a side hustle inside the organization, because they're probably not getting a lot of love and it's hard work. It's a real slog.
B
Yeah. I mean, a lot of the behavior I observe is that it's often the CEO who's sending the articles, bringing AI up in meetings, posing provocative questions, and then it's kind of like, anyway, back to business. It's not the work to be done, which is significant design work, and also education, and also experimentation. It doesn't live in the strategy, and it doesn't live in the operating rhythm, and it doesn't live in anybody's role except for the poor head of AI transformation, who's basically screwed. It's just not in the fabric of companies yet.
A
No. And I think that it won't be for a while. And again, there needs to be some really significant motivation or catalyzing moment where the organization has to agree to start to do this kind of work. And I think that usually comes from economic risk.
B
Yeah.
A
From outside, right? Meaning a competitor, or a customer — a customer says, no, we're no longer going to be a customer, because we can do this without you, and we can do it with AI. I call this jumping capability boundaries. What I love about AI for individuals is it makes individuals more able to jump those capability boundaries. If you're not a financial analyst, you can kind of be a decent financial analyst with AI, or you can do a little bit of your own coding now, and so on. So I think this idea of jumping capability boundaries is good for people. It's going to be tough for organizations, because customers and suppliers will begin to jump those capability boundaries with AI and potentially become your competitors, or just stop being customers. So again, I think organizations are going to need to see more of that to give them the kind of motivation they need to do this work, because it is hard work, and it won't be accomplished by buying more AI, which is the easiest thing to do. If you're a CEO who says, I want to be all in on AI, what's the first thing you do? You just go buy a bunch of AI. And that's easy, because there's a ton of great AI out there, a lot of great applications, and all kinds of startups and vendors. And for your CTO, or whoever's buying the software, it's easy: cancel some old SaaS contracts, because no one's using them anyway. We're overpaying for all this software. So cancel a few SaaS contracts and then go buy some AI. So we routinely work with organizations that have three to seven to nine different AIs they've bought and "deployed," air quotes, to the org. No one's using any of it.
B
Well, it feels like you can say you did something.
A
Yeah, you did something.
B
If you have a receipt.
A
Yeah. And you're spending money. Exactly.
B
Like, hey, right? You're investing, you're investing something.
A
Absolutely. You do a couple trainings and you do, like, an AI week or whatever, or a lunch-and-learn. But listen, the good news is there are a few companies that are getting way past that, and they're getting into that 60, 70, 80% weekly active usage — companies like Moderna or Zapier or Box. But it takes that level of commitment and investment that we're talking about, I think, to make that happen.
B
Yeah. One of the things that you said before, which I hear a lot, is that the point a lot of executives are steering toward has something to do with productivity — that the first principle behind the AI transformation work is an increase in productivity. I think that's not a good first principle. And I'm curious: if you're king of whatever large organization and you're like, hey, my first principle around becoming an AI-led organization is X, what is it?
A
It's cut and create. That's what I would do. So, yeah, I share your point of view somewhat, which is: I think this is both an efficiency and a growth play, or strategy. We know that in customer service, certainly, and some other obvious areas, that's already happening. But what I think we should think about is cut what you don't need to do anymore, in order to free up the bandwidth that we talked about, and the resources and the capital, in order to create. And with create, I mean, that's the fun part, right? Let's create new products and services, let's create new business models, and then eventually, hopefully, we'll be creating new jobs and new revenue for the organization. So I think you should do both. You can do them in parallel or serially. I think if you're going to start with cut, do it fast. All the anxiety in AI is in that first phase, the cut phase. That's what we just talked about. That's where employees are not sure, understandably. So I think you've got to get through that cut phase as fast as you can and get to the fun part. And the fun part's great, which is: how do you reinvent your products and services and your business model? And the business model part's really important. I think a lot of incumbents think about making new or improving their products and services with AI. They absolutely should be doing that. But be careful if you don't also focus on your business model, because the AI-native startups will do both. They'll make a better product and service, and they might make a better business model. And once the customer sees a better business model, they love it. They're not going to go back. And so I do think a lot of disruption is not necessarily product disruption, in terms of someone having a better mousetrap. It's the better mousetrap combined with a better business model for the customer.
B
That's cool. You know, it brings up something else that I wanted to ask you about. I'm a big fan of this book called Dual Transformation. Do you know this book?
A
I don't know it. So same one.
B
Okay. It's good. The crux of it is essentially this: in order to do real transformation, one has to do the transformation in their core business at the same time as they build the new business alongside, even understanding that the new business may ultimately cannibalize the core business. And my sense is that's gonna happen a lot with AI. Even as we automate and find efficiency and do things better and reduce headcount and provide a more seamless customer experience in our core business, as we build alongside, we are going to lose some of that core business. And the point of dual transformation is: plan for that, and move into the new house over here when the paint's dry, you know, rather than trying to keep this thing working. First of all, I'm curious just your take on that. And second of all, I have in my show notes from Jack that you guys have done this at Section.
A
We have, yes. First of all, dual transformation sounds absolutely right. That is the strategy, I think, to survive and/or thrive in these moments, which is: yeah, you need the core business. I mean, that's providing the cash and, you know, the revenue and all the people. Right. So you need that, and you want to make it more efficient and optimize it, as you said. And at the same time, can you do the skunk works? Can you start the startup inside the big company to get ready to disrupt yourself? Most people don't want to do that, though. Like, Rodney, I mean, it's the right strategy. I just don't see it happen very often, very effectively. I'm not sure why. My guess is it's a combination of factors, including: you're not that serious about the startup within the big company. You are for the first year or two, but probably not in terms of the long term. You may not really give it all that it needs. You may not be able to get the people. I think so much of creating a startup is about that founding team, about getting the right people to build whatever it is you want to build that's going to disrupt what you already have. I think it's hard to get those people unless it's a real startup with real economic upside. The crazy people that want to do that kind of disruption, that work, that amount of effort over typically three to seven years at least, to build something that's going to be truly disruptive to others: they need to see significant economic upside, and the corporate parent doesn't want to provide that much upside. And I think it's way out of.
B
Whack with the rest of their incentives.
A
With incentives and their risk. And you've got to tell Wall Street about it. And this is Mary Barra at General Motors. You've got to give her credit, right? She bought Cruise and then funded it and then realized it was just dog shit that didn't work. The good news is she's kept her job. Some boards, you know, would have fired her by now, but she's a really effective CEO and she's been able to kind of withstand that failure. But that's a good example. I mean, that's not for the faint of heart. Most of my friends, the friends of mine that are my age, because I'm old, are like, I don't want to deal with this. I'm gonna head to the golf course. This AI stuff, this dual transformation, it's like that. I'm just gonna go golf.
B
I'm on the back nine. I'm not doing it.
A
Back nine. I'm not doing it. I'll let Gen Z figure this out. Whoever the hell it is, whatever that generation is. That sounds like the right strategy. It's really hard to do. I think what usually happens is they buy; they don't really self-disrupt. They wait till it's too late, overpay, and buy it and try to bring it in. I don't think the track record for that's very good either.
B
No. And then they ruin the companies they buy.
A
Yeah, totally, right. And they don't really turn the corner or sort of make the transformation happen. And we'll see this when the bubble bursts. This is going to be air cover for a lot of executives, CEOs, and so on to say, well, I told you so. This AI thing, it was all hyped up, and we don't need to focus on it as much. We don't need to invest as much. We can just kind of put that stuff a little bit on the back burner now. We're absolutely going to see that next year. And the reality is it's going to burst for investors. But the AI bubble is not bursting in terms of these capabilities. And the people who are serious about it will just double down. When the bubble bursts, all the bozos will get cleared out. All those AI influencers are in my LinkedIn feed every day. They're going to be gone. That'd be good for my LinkedIn feed at least, you know. And the people who really give a shit about this and want to work at it, they're going to keep their heads down and just kind of plug away and emerge with a great startup, or maybe transform their own organization. That's what Jeff Bezos did when the dot-com e-commerce bubble burst. Right? I mean, people don't remember. His stock dropped 95%.
B
Right.
A
The Wall Street Journal called him, you know, "Amazon bomb." He was ridiculed. And they had to lay off people and kind of hunker down. And then they just kept investing. And Nordstrom.com and Walmart.com, all those guys, used it as an excuse to basically deprioritize online shopping and e-commerce. And Amazon used it really as a way to build market share for 10 years. And so the same thing's going to happen here when this AI bubble bursts, which it surely will in the next year or 18 months.
B
So tell me about Section disrupting itself.
A
Well, two parts of the story. We had to pivot first and then disrupt ourselves second.
B
I don't know the pivot part. I don't know. We only know the disruption part.
A
Yeah, we were an online business school; we basically taught strategy to executives online. That was a good business. And then it was only an okay business. And then I played with ChatGPT Plus on February 1, 2023, and I realized this was the new accelerant, both for people and for companies. It would be AI, not better strategy. I mean, clearly it's both, but AI just struck me as a better bet. So I kind of pivoted the whole business that day.
B
Dang.
A
Yeah. Like any other leader, it's hard to do a pivot, because everybody was like, what? And: I've got a day job, I'm busy, how's this going to impact me? And will you lay me off? It took a year, and we were a very small organization, under 30 employees. So that was the pivot. The self-disruption is: we've built an AI coach. The business had been to use workshops, both online and in person, strategy sessions, hackathons, all those kinds of manual, effective-but-manual techniques to kind of make the change happen for our clients, for our enterprise clients. Now we're doing all of that inside of intelligent software. It's called Prof. AI. You know, it's software. It's AI software. It's an application. And this is going to be the way we get our organizations trained, constantly trained and retrained. Help them find their use cases, help them find what to do with their new time: what's the more strategic work they can do? All that's going to be coached through an AI assistant called Prof. AI, and we just launched it in April. We have 100, maybe 125 enterprise customers now. It's growing like crazy. It was the right move, but we were giving up good, high-margin revenue and replacing it with less. So, yeah, scary moment. But fun, you know. And if we don't disrupt ourselves, then someone else will, and then we'll be dead. And that's worse.
B
Yeah. And yet, as you know, that's not a stance that especially executives of publicly traded companies often take because they're stewards.
A
Of much bigger assets and they are paid to, if it's publicly traded, manage the stock price.
B
Yep, absolutely. You used a word that we use a lot here. So we talk a lot about the industrial age and the assembly line, sort of forging the role of manager for the first time in the discipline of management science. And then the information age and like sort of the GE paradigm, forging the role of leader and like Rockstar executive leadership status being the new hotness of the 80s and early 90s. And we talk about this age, the intelligence age, as requiring a shift from leadership to stewardship. And the way that we conceive of stewardship is actually thinking of an organization as a living complex system and one's role in it being to steward its survival, steward its adaptation. No different than stewarding a family trust or a land conservancy. It's not about owning it and trying to control it, it's about shaping it, trying to keep it alive, essentially. And I've heard you say the steward word a couple of times too. I'm curious, like first of all, I'm curious just if that lands with you or if you think I'm full of shit. And second of all, I'm curious like what posture you'd like to see leaders taking now.
A
So I don't think you're full of shit, or only partially. I think it's both, Rodney. I think we want to steward, and there'll be moments of leadership required, you know, with deep conviction. Yeah, maybe there's no data, so a lot of intuition, a lot of judgment. The stewardship doesn't 100% land with me. It lands in part. But I would say it's that combined with old-fashioned leadership, you know: kick ass, take some names, make some tough calls, and get the organization or team pointed in the right direction. In the absence of data, as I said, in the absence of consensus even. I think there are moments where I want leaders to step up more and really advocate for a point of view or a decision and own it, and really take that risk, both reputationally and professionally, because that's what those kinds of decisions are. And when we try to manage out that risk to ourselves, I think we manage out the upside of that decision.
B
Yeah, I agree with you. We often say consensus is a race to the bottom. And I think one of the things that neither of us hit on, but is in both of our desires around this posture, is: I want to see leaders, stewards, in relationship with the broader environment. I want to see the leaders of companies paying a lot of attention to what is going on out there and to what is coming, and to have more of an interdisciplinary approach to leadership. Not just, well, what am I doing and what is my competitor doing and what did we promise to Wall Street this quarter, but what is happening out there? And I don't see a lot of airtime given to that sort of future-casting and external orientation. I see a lot of airtime given to today's crisis and firefighting.
A
Yeah. No, listen, I love that, Rodney, and I think you're right, especially in this moment, in this age of AI, because there are so many externalities, so much impact. AI is just going to shake so much stuff up, in so many good and not-good ways, and intended and unintended ways. And I would agree with you. We want leaders who are at least trying to wrap their heads around some of this stuff. First of all, for their own organization: how will it impact them? I'm stunned by how many leaders don't have an answer when employees ask them, well, what will be the impact on jobs at my company? That's a basic question. If you're going to be all in on AI, you should have an answer about that. Right. What's the impact on the organization, and why are we doing this? And so on. And then you need to get outside of that, to your point. Think about AI in our schools, AI in terms of the health of our society. It drives me crazy that people want to buy AI from Meta or xAI. These are unsafe AIs. I mean, we won't pay for any AI from those companies for our own employees' use. And certainly when I'm asked, I advise every CEO: you should not be buying that AI. You should not be paying for it. Have an opinion about AI safety. Have an opinion about how we align AIs to our benefit, to humanity. And you do have a couple of ways to influence that. One way is with your wallet: pay for responsible AI and deploy responsible AI. And that to me is, you know, AI from companies like OpenAI, Anthropic, Google, Microsoft, and not some of these others. So yeah, have a point of view on that. What is your expectation of the AI company that you're buying your AI from, around their safety and alignment investment to make good AIs? So few executives even think about it. It's crazy.
B
Totally agree. And I think that, you know, one of the things that I've heard a couple of very famous investors say, where I'm like, I don't like a lot of what you guys say, but I like it when you say this, is: systems thinking is the skill for leaders to learn. And as someone who does that professionally, it's not a discipline that is well honed out there in general. But I think, to your point, that is what allows you to know the levers of your business, and, as you said, understand the levers in the larger environment, have a real practice of first-principles thinking, know how to develop a rubric for what safe AI is, what values-aligned AI is, know how to charter AI's role in a team or an organization and determine what authority to give it. Like, this is basic systems thinking shit that is not really part of the conversation of most teams.
A
Yeah, that's because, I think in part, Rodney, we're at peak capitalism right now. Yeah, we are. You know, your frustration that leaders aren't thinking about some of these bigger challenges and issues, or kind of the bigger picture, and really are sort of heads down, focused on either the next crisis or the next quarter or, frankly, the next earnings call: it's because we're at peak capitalism. We have one CEO, Elon Musk, where the board has approved a potential trillion-dollar pay package. I mean, that is peak capitalism. We have CEOs who routinely boast about shrinking the size of their organization in order to drive earnings. Like, boasting about it. I get it, you have to do that in terms of becoming a more efficient organization. That's the cut part. But also, why aren't you boasting about new products and services? Why aren't you boasting about new jobs created, new value being generated by your organization for the communities in which you do business, and so on? But right now we have CEOs mostly boasting about layoffs. So I think we're at peak capitalism, and I think that something's going to break and change.
B
That was going to be my next question. I heard someone on a stage the other day also say knowledge work is at peak inefficiency. And now you, Greg, are saying we are peak capitalism. Those things don't feel like a recipe for human flourishing. How do you think this goes? Like, I think most of us have the felt sense that what is happening right now is not sustainable. What do you think breaks us out of this moment? Is it a recession? Like, what is it?
A
Yeah, I don't know. I think it's probably some combination of Gen Z resistance.
B
Okay.
A
Anger, anxiety, resistance. You know, their early-career prospects seem to be diminishing. The data is still, I'd say, unclear, but we seem to be getting some. There are those recent studies out of Stanford on early-career jobs, particularly in software engineering and customer service, where early-career prospects were reduced by AI. We'll see if they're the canaries in the coal mine. I think really that's the generation that's going to suffer the most, and I think we'll hear from them first. So I think it's a mix of the current system straining under the fact that it's become so bifurcated, so extreme, in terms of some doing so well and some just struggling to make ends meet, even in the knowledge economy. And I think also, maybe the same generation finding different ways to make a living, realizing that that path is not going to work for them. I'm optimistic but pragmatic about it: maybe we'll end up creating a lot of new and different jobs, maybe as solo entrepreneurs, maybe an explosion of entrepreneurship inside the knowledge economy, in terms of small-team companies that can generate a good living for the people that work there. That's what I'm hoping. I think large organizations are shrinking, and I think that's just obvious and inevitable. I think we are moving to an economy where there'll be super companies, and those are not great.
B
Tell me about that.
A
Yeah, I think super companies are ones that are different on the inside. They're smaller, they have super leaders, they have super employees, and they have a whole bunch of agents and a giant cable going out the back to a bunch of AI. And I think CEOs want super companies. I think Wall Street's going to reward super companies with outsized valuations. They're going to be very efficient. They're going to likely grow faster than others while using fewer people. So I think we are entering this next era of capitalism, which is super companies. And I think they'll be in every industry. We'll have super law firms that just behave and operate differently than traditional law firms. They'll be more profitable, they'll likely be leaner, they'll likely serve their customers better in terms of what customers actually want and how they pay for it. So in almost every industry we're going to have super companies, and of course tech will have a lot of them. It's going to be great if you work in a super company, but so few of us can, because there's only so many of them. So we're going to have to figure out what to do. And I think that's going to be not working for anyone, or working for ourselves in some way. That's what I'm optimistic about. I'm not sure how it looks yet, but I feel like that's what could happen here in a good way.
B
That's really cool. My last question for you was going to be what you think the future of organizations is, and you answered it really well. I'll just add to the soup a couple of things that I don't think are new, and I don't think are AI-related, and then I'll ask you what you think about that. So, you know, the gig economy has been so prolific that something like 45% of Americans have some kind of side hustle, and this idea of a portfolio career has become a thing just in the last 10 to 15 years. At the same time, to your point, the lowest-paid employees who are still full-time W2 employees in companies cannot make a living and support themselves on a full-time salary. So it's born of necessity and also of the availability of this gig work.
A
Yep.
B
I also think that the sort of membranes around organizations must necessarily become more porous. The idea of, like, I'm an employee and you're a contractor and he's a vendor and that's an agent, I'm like, who fucking cares? What does it take to get a piece of meaningful work done? I feel like that's been true, and also large organizations have really resisted the idea of more of an open talent marketplace in order to achieve important things. So if we take all of that together, and we also take the idea of super companies, I guess: what do you think the role is for companies to play in society in the next five years, as some of these truths become self-evident?
A
Such a great question. It's almost too big a question. I don't have a good answer here, Rodney. I'll tell you what I consider my role as CEO of Section and what our company is all about: create a great product and service that customers want to pay for, and do that with good margin, and offer good employment to those that we can, and generate return for our investors. Do that in an ethical, authentic, and fair way. Try to enjoy doing it while we're doing it. Withstand the competitive pressures and all the stress that come with that. You know, I don't know if that's changed from 30 years ago when I started my first company. It feels like it's the same thing that I've tried to do at every startup: just do it in a way that when we look back on it, we're proud of it.
B
You know what your answer makes me think of, and it goes back to the very first part of this conversation: as more and more work does not require human effort or capacity, we're gonna need something to do. And maybe there is a world in which organizations become the organizing way for us to still have human experience and contribute something that feels meaningful to someone, somehow. Because I don't think, to your point, we're all just gonna write poetry and smoke weed all day, and nobody wants to spend that much time with their families. We're gonna need something else to do with our brains, even if it's not completely economically necessary the way that it is now.
A
Yeah, I think that's right, Rodney. Here's the good news: there are so many big problems that we need a lot of human ingenuity and, just frankly, talent showing up. Not just the solution, but the people to actually deliver the solution. We're short teachers. I think it's 200,000 teachers short just in the high school system in the United States. We're short healthcare workers and nurses. We have labor shortages everywhere, and sort of expertise shortages everywhere. So I think the good news here is that we've got some big problems that we could apply people to, people who've got some experience. And even if they don't have the experience, AI can probably help out with that. Just show up, right, with head and heart. Head, heart, and hands. And you're right, maybe that is one role of an organization in the future: to organize us to go tackle some of these challenges.
B
That'd be pretty cool. This seems like a pretty good place for us to wrap it up. Greg, I could not have enjoyed talking to you more. Where can our listeners learn about you and about section?
A
Go to GregShove.com. That's G-R-E-G-S-H-O-V-E dot com, and all my links are there. I've started a podcast and newsletter called AI Truth Serum. So if there's one thing I'd like people to do, it's check that out and decide if it's worth subscribing.
B
Awesome. We will link to that in our show notes. For the listeners: we are always looking for new topics for the show, so if you have an organizational pattern that you're having trouble changing, shoot us a note at podcast@theready.com. This show is engineered by Taylor Marvit and produced by Jack Van Amberg. Work with the Ready is created by The Ready, where we help organizations around the world change the way they work. Thank you so much for listening.
Host: Rodney Evans
Guest: Greg Shove, CEO of Section
Date: January 12, 2026
This episode explores the persistent challenges and hidden opportunities of adopting AI at scale within organizations. Rodney Evans interviews Greg Shove, a pioneering AI workforce transformation leader and CEO of Section, renowned for training thousands in practical AI adaptation. Together, they dissect why most enterprise AI strategies stumble, how workforce dynamics and organizational culture shape adoption, and what forward-thinking companies are doing to genuinely capture value from AI. They emphasize the necessity of a deeper, more honest conversation about work redesign in the intelligence age.
“I wish that AI CEOs…would stop saying we should slow down and really contemplate the impacts...because…they don’t have any intention of slowing down. So just be straight up with us, we’ll figure it out.” — Greg Shove (01:28)
"When the bubble bursts, all the bozos will get cleared out. All those AI influencers in my LinkedIn feed every day—they're gonna be gone." — Greg Shove (00:01, 35:28)
“I have a lot of empathy right now for heads of AI…because they’re probably not getting a lot of love and it’s hard work. It’s a real slog.” — Greg (24:43)
"Most companies deploy [AI] like software and then they're shocked that no one's using it…because they haven't really answered the question…what am I going to be doing after?" — Greg (05:23)
“AI is truth serum for organizations and AI reveals a lot…You'll see the lack of growth mindset, you'll see the lack of organizational coherence, you'll see a lot of busy work, you'll see the undocumented workflows...” — Greg (10:09)
“The stewardship doesn’t 100% land with me. It lands in part. But I would say it’s that combined with old-fashioned leadership…kick ass, take some names, make some tough calls…and own it...” — Greg (39:08)
“As AI takes the more rote, easier, dopamine producing layer of work, those people don’t know what the fuck to do. And so…do we all actually want that space opened up?” — Rodney (09:24)
“In order to do real transformation, one has to do the transformation in their core business at the same time as they build the new business alongside—even understanding that the new business may ultimately cannibalize the core business.” — Rodney (30:19)
“If you’re going to start with cut, do it fast. All the anxiety in AI is in that first phase, the cut phase…get through it…get to the fun part: reinvent your products and services and business model.” — Greg (28:54)
“We are entering this next era of capitalism, which is super companies…inside they’re smaller, they have super leaders, they have super employees, and they have a whole bunch of agents and a giant cable going out the back to a bunch of AI. And I think CEOs want super companies…But so few of us can work in them.” — Greg (46:46)
“Systems thinking is the skill for leaders to learn…That is what allows you to know the levers of your business…understand the levers in the larger environment…” — Rodney (42:54)
| Timestamp | Segment |
|-------------|------------------------------------------------------------------------|
| 00:01 | Greg’s take on the coming AI “bubble burst” and influencer fatigue |
| 01:28–03:20 | Opening reflections on AI hype, corporate insincerity, and uncertainty |
| 03:44–08:58 | Why enterprise AI adoption is so poor; individual vs. org incentives |
| 09:56–11:46 | Bureaucracy & “AI as truth serum” for ineffective organizational work |
| 12:05–15:04 | Necessary ingredients for true AI transformation |
| 15:04–18:46 | Challenges of strategic work design and comparison with startups |
| 20:03–21:37 | Risk obsession and bureaucracy block AI progress in large orgs |
| 21:37–25:43 | What really catalyzes organizational change (competition, fear, risk) |
| 28:27–34:00 | “Cut and create” and the importance of dual transformation |
| 35:28–37:24 | Section’s pivot and self-disruption with AI |
| 37:33–44:39 | Leadership vs. stewardship in the intelligence age |
| 44:39–47:52 | “Peak capitalism” and the rise of “super companies” |
| 49:39–51:17 | The evolving role for organizations: organizing meaningful human work |
This episode delivers an unflinching, often irreverent critique of why most enterprise AI strategies fail to deliver, and what a more honest, humane, and ambitious approach looks like. Greg Shove and Rodney Evans illustrate that meaningful progress demands more than software purchases and productivity hacks—it requires cultural reckoning, managerial courage, systems thinking, and above all, a vision for reshaping work itself in the intelligence age.
Produced by: The Ready
Contact: podcast@theready.com