
A
Hey everyone. I'm super excited to be sitting down with Kenneth Cukier. Kenneth is Deputy Executive Editor covering AI for The Economist and a New York Times bestselling author whose book Big Data: A Revolution That Will Transform How We Live, Work, and Think has sold over 2 million copies. What I love about Kenneth is that he sits at the intersection of business, research, and journalism, having worked as a research fellow at Harvard and Oxford, where he led seminars on AI and business. He travels the world exploring what's next in AI and is a seasoned speaker at forums like TED, Davos, NATO, Google, and the US State Department. I want to ask him where he sees hype versus reality in AI. What can this technology really do? Who has a chance to get ahead? And what does it mean for us as workers, leaders, and people? Let's find out.
I'm here with Ken Cukier to talk about all things AI and the future of AI. Ken, this is an area you've been following, whether it's data, algorithms, or AI, as a journalist for a pretty long time. And we now find ourselves with a $3 trillion and rising bet on this technology. So I'm curious, from your perspective: what can this technology do, what can it not do, and how does that shape your outlook on how all this is going to play out?
B
You know, some people ask: is AI overhyped or underhyped? It's a little bit of both, in fact. It's overhyped simply because there's so much investment, as you pointed out, going into it. $3 trillion over the course of basically four years is exceptional in the history of technology, of any innovation. We've never really seen that before in corporate R&D, just to put a point on it. So it's unprecedented to get that much investment. You could point to the railway mania of the 1800s, but that wasn't compressed into such a short timeframe. However, the case for it being underhyped is also fairly strong. So what is AI, and what can it do? Let's start with what humans are and how we make decisions. We're cognitive creatures. We learn and we mature, and then we find economic value in making lots of decisions. If we can make repeatable decisions, that's great, but often these are going to be one-off decisions. And we're very stunted temporally: we degrade over time and then die. We're also stunted in the sense that we only have five senses, we have one brain, and we need to sleep. Now, AI can exceed human cognitive capacity. Computers were able to do that from the 1940s, so there's nothing really novel about that, and we're still understanding the immense power of it. But the real gem of AI is that it exceeds our ability not only to understand, but to learn new things, to pierce the frontier of knowledge. What I mean by that is that the world is very complex. There's a lot of covariance, and we tend to dumb it down and simplify it so that we can understand it. But if we had a mechanism, a tool, whereby we could exceed what is known, with sensors that can detect things the human eye and ear cannot, and grasp the intricate, elaborate workings of reality that far exceed our ability to suss out or even contemplate, that's going to be a beneficial win. So I'll give you an example, one piece of research from several years ago. Google wanted to determine whether you could look at retina scans, the ordinary sort of scan you'd get at an eyeglass shop on a main street, and identify cardiac events, whether a heart attack or cardiac arrest. And sure enough, the algorithm they created by putting in lots of data and analyzing it was able to do that. It was able to identify whether the person was a smoker at about 85% accuracy. Really interesting. Through the retina alone, it was able to identify the age of the person, plus or minus five years, at about 75%, which is much better than chance. And those two features do affect whether someone's going to have a heart attack, for fairly obvious reasons: age and whether you're a smoker.
But it was also able to identify the sex of the individual, male or female, at around 97% accuracy. And what was stunning about that is that it was never even known in the field of ophthalmology whether that was possible at all. So AI was able to identify something in the structure of the scans that humans didn't even have a theory for. We wouldn't have known to look for it, although the researchers did try to identify what it was, and it turned out to be incredibly accurate. That one finding speaks to the broader way in which AI is going to move through society. At first, with the classic machine learning problem of identifying new items of knowledge, we were still able to suss out why. For example, when AlphaGo beat the world's best Go players, those players could study how it made its decisions, reason through them, and come up with a mental model, a schema, of what was going on. But pretty soon, and the retina scan is just the beginning of something bigger, AI is going to exceed our ability to understand what is going on. We won't even be able to reason through how it reached its conclusion, but we will know that it's valid. Now, in the case of 97%, remember, it's not a slam dunk. In most instances it will identify the sex of the person correctly, but in a blurred area of 3%, it will get it wrong. What will we do as a society when that 3% is a misidentified cancer, or the rocket that explodes on launch? Those are new questions we'll have to face as we use AI more and more.
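To make that "97% is not a slam dunk" point concrete: here's a rough back-of-the-envelope sketch applying the accuracy figures quoted above to a hypothetical population of one million scans (the population size is an assumption for illustration, not from the research):

```python
# Illustrative only: what a given accuracy level means at population scale.
def expected_errors(accuracy: float, n_cases: int) -> int:
    """Expected number of cases a classifier gets wrong at a given accuracy."""
    return round((1.0 - accuracy) * n_cases)

# Accuracy figures quoted in the conversation, applied to 1M hypothetical scans.
for task, acc in [("sex", 0.97), ("smoker", 0.85), ("age within 5 years", 0.75)]:
    print(f"{task}: ~{expected_errors(acc, 1_000_000):,} misidentified per million")
```

Even the impressive 97% figure leaves on the order of 30,000 errors per million cases, which is exactly the societal question raised above.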
A
So with all of that in mind, and specifically tying it to the societal piece: it feels like people are struggling to get a singular narrative about how this is going to impact their lives and their livelihoods, and for good reason. There's obviously a ton of uncertainty here. I know you're a journalist, so you try to maintain some level of neutrality, but on a scale from full excitement to complete panic, how do you feel looking out over the horizon at some of these new capabilities and the potential disruptions they could have on our society? And is there anything you think we, as both the consumers and the creators of this technology, need to do to make sure this ends up being a force for good?
B
Well, I guess there's two elements to this. The first is that nobody really knows what's going to happen. Everyone has a thought and a theory. And if some say jobs are going to be eliminated and others say that's not possible, one side is going to be correct and the other is going to be wrong. The second element is that we don't do very well with change, because we tend to extrapolate from what we have today and assume tomorrow will look just like today with one variable changed. And that never happens, right? You never isolate change down to a single variable. When you have a change, everything else changes as well. Let me explain what I mean in terms of healthcare. Imagine that the cost of diagnosing an ailment, say a cancer biopsy, goes from what today might cost $1,500 and be charged at $7,000 for a simple test down to one penny. Just like the penny post in England in the 1800s for sending letters, you've got a penny scan, because it's image recognition and data collection; I'll explain how that works in a minute. When that happens, a classical economist would say all these people are going to be out of work. But then they would look at it from a consumer welfare standpoint, that is, the value to the individual, and say it's going to be great, because people are now paying a penny, not thousands of dollars. But it's more subtle still. There are a lot of secondary effects.
Imagine that the data is now collected every time you flush the toilet, and it's just a natural cost of doing business, added to your bill. Now you're identifying that some people are going to get cancer and some aren't, and you're identifying new trends signaling the progression of disease. You're able to spot incidences of cancer far earlier than before, because you're learning new knowledge about how the cellular structure changes and the biomarkers that identify whether something is cancerous. Suddenly you're able to prevent cancer, rather than having someone knock on the door having felt a lump and say, hey, is this a problem?, when it's too late to act. So from a consumer welfare standpoint, everyone is better off and healthier. There may be as many jobs or more in healthcare, but people are earning money in different ways.
Let's say there's not more money in the healthcare system. Let's say that if you go from thousands of dollars to a penny, you're in fact paying more for other things, just not in that one domain. Keep in mind that GDP, the way we measure the strength and value of the economy, only measures market transactions. So if something you used to pay for is now free, GDP goes down. That was the great phenomenon between 1995 and 2015 in the US economy, when they looked at the value of the Internet. You no longer had to print out a PDF of your blueprints before 4:30 and race uptown in Manhattan to hand it to FedEx by the 5pm pickup so it could be shipped to California overnight. All of that was gone. The cost of the printer, the cost of the taxi, the cost of FedEx, and your time: gone. So what was the value of the Internet to the US economy in this instance? GDP went down, because it was free, and everyone benefited, because you could click a button and the file arrived in Santa Barbara within milliseconds. So when we think of AI, things will absolutely change. We don't know exactly how, but we have to remember there are going to be second and third order effects, not simply that first order effect. I would look for these changes to be quite profound. I don't think we're going to be out of work, but almost all jobs will definitely change.
A
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below. And don't forget to like and subscribe.
So if I'm interpreting that correctly, you see a huge benefit on the consumer side, whether it's in your work, your health, or your personal life, in terms of the cost and the quality of what you're getting. And as an aside, it sounds like you're not a super big fan of GDP as a particularly accurate measure of wealth. I just came from a conference where a colleague of mine was... yeah, go ahead.
B
Well, I'm not totally anti-GDP. It's useful in certain domains but not in others; you really have to recognize its limitations. But I'm definitely pro-consumer on how AI works its way through the economy and makes our lives better. It already is: Google search being one example, translation a second. But most people don't realize that a typical cell phone is loaded with AI. For one thing, it's managing the battery's power consumption. You've got 80 apps open because you never close them, and a model predicts which ones you're not going to use and powers those down, while keeping alive the ones you're likely to come back to. Just from an AI system operating in the background managing the battery, you get something like 20% better battery life.
A
Right. So when I think about these use cases, it's no surprise that in all of these examples the notion of data has come up: needing to have this information about the consumer or about the system. And this is an area you've been covering for quite a long time now. So from your perspective, is this all a continuation of the big data revolution we saw 10 or 15 years ago? Are there discrete chapters, or is this a completely different track that we're on?
B
It is absolutely a continuation. In fact, the term big data was always a shorthand, and it was a shorthand for machine learning. And the term machine learning was a shorthand for artificial intelligence. The reason they used these other terms is that back in the 2000s you couldn't actually use the term AI, because it was considered laughable; the earlier techniques hadn't worked, because they took a different approach. So they had to come up with a different term, and big data became the shorthand for what was going on, just like nobody talks about 802.11, but that's the Wi-Fi standard. The canonical text, written by Stuart Russell and Peter Norvig of Google, says it all. It's the textbook that all the undergraduate students learning AI have to read to advance, and it's called Artificial Intelligence: A Modern Approach. Why that subtitle? Well, the point is that in the past the field grew up on almost hand-coded instructions: the symbolic mechanism of basically giving the computer instructions so that it could do things. The shift that took place, led by Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three considered the godfathers of AI, was to invert the process and say: no, don't instruct the computer what to do. Give it lots of data, and it will infer the right answer from its training set. That technique was machine learning, which became deep learning when you had multiple layers. Hinton gave us backpropagation, so the system learned from what it had already learned, a kind of recursive learning that was phenomenal. Then it was off to the races: around 2012 with the GPU; around 2015 with generative adversarial networks, which use the same kind of model not to predict but to generate; the Transformer architecture in 2017; and then 2022 with the large language models that have captured everyone's attention. But it all comes back to the same fundamental method: give it data. And the doctrine is more data, better answer.
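The inversion described here, from hand-coded rules to rules inferred from data, can be sketched in a few lines. This is a toy illustration (a made-up spam example, not anything from the interview or the textbook): the first function is the symbolic approach, a rule a programmer wrote; the second learns word frequencies from labeled examples and derives its behavior from the data alone.

```python
from collections import Counter

# Symbolic AI: the programmer hand-codes the rule.
def symbolic_is_spam(subject: str) -> bool:
    return "free money" in subject.lower()

# Machine learning: infer the rule from labeled training data.
def train(examples):
    """examples: list of (text, label) pairs; returns per-label word counts."""
    counts = {True: Counter(), False: Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def learned_is_spam(counts, subject: str) -> bool:
    words = subject.lower().split()
    spam_score = sum(counts[True][w] for w in words)   # missing words count 0
    ham_score = sum(counts[False][w] for w in words)
    return spam_score > ham_score

data = [
    ("free money now", True),
    ("claim your free prize", True),
    ("meeting agenda attached", False),
    ("lunch on friday", False),
]
model = train(data)
print(learned_is_spam(model, "free prize inside"))  # behavior came from data
```

Nobody told the learned model that "free" signals spam; it inferred that from its training set, which is the doctrine of "more data, better answer" in miniature.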
A
Right, right. That's sort of the corollary of garbage in, garbage out: the answer is only going to be as good as the data it gets. And the reason I was curious about that is that people have been talking about big data for a long time. Fifteen or more years ago, and I'm sure you saw this in some capacity, there were organizations that recognized the power of big data and machine learning and were collecting and analyzing reams of data, not just the Googles of the world but all sorts of enterprises. And there were organizations that just missed the boat, that didn't lean into it. And it seems like, and I'm curious about your perspective, a lot of the organizations that were well positioned at that time are now collecting dividends, and they're only becoming more powerful and getting farther ahead in this era. So my question is, with some of these new tools:
Do you predict we're going to continue to see the same winners getting farther ahead? Or, for organizations that missed the boat the first time, is there an opportunity to leapfrog some of the traditional methods and start to take advantage of these technologies?
B
So it's a big bet. And I think we're still in the early days of the data revolution. I'm really struck by that. Every now and then I get a knock on the door to give a talk, or someone says they like the book, from an organization that's just learning about data, and I'm a sort of artifact from 15 years ago when I was writing about it. You think: how are these people still in business? What is going on in the economy? But it takes a long time for these things to seep in. We see this in the stock market. There's the phenomenon of the best versus the rest. There's been lots of research, but Erik Brynjolfsson, formerly of MIT and now at Stanford, has some of the best papers on this, showing that the companies at the top of their field are not 15% or 25% better; they're three times or eight times better in terms of capital allocation, return on investment, and stock market performance than the average of their industry, and certainly much better than the laggards. And of course we know that with the stock market in 2024 and 2025, if you take out the AI-related and big tech shares and look at the market as a whole, it has basically plateaued, really flatlined. Almost all of the gains have come from the AI and technology companies, including defense tech, which pulled ahead and lifted the whole market. So we're already seeing these incredible gains. However, I think there's a sort of gravity to a business. Think about Walmart. It was 20 years into Amazon when Walmart woke up and said: we're stuffed unless we do something. And they're back, and they're doing things. They've got a great brand and a very capillary network. The clicks-and-bricks model turned out to be a very good one: click and collect worked well, and people wanted to see a product in a showroom, touch it and feel it, then order it online. Amazon can't do that, but a lot of retailers can. So it turns out there is more of a constancy and staying power to companies. The other thing about being a real-life company is that if you make the investment, you can get the data in a way that new companies cannot. And if you cared even a little about data in the past, you can go back to your troves. You have to clean them, and it's very expensive, but you can actually learn from that data.
So I'm actually mesmerized by the number of companies that have stayed in the game even though they were so slow out of the gates, but are now in AI making some of those bets. And of course there's a whole ecosystem of service providers willing to work with these companies to get them where they need to go. These companies on their own cannot hire AI engineers. But it's just like the promise of business consulting: you can't afford the best strategist for a one-and-done project, so that cost is amortized across the Fortune 1000 by choosing McKinsey or BCG or Bain, who host that person and allocate them to you on a piecemeal basis.
A
So when these people knock on your door and say, help, I should have done this 15 years ago, but I'm ready to start today, what do you tell them? What's your advice for how to get started, how to think about this, and how to get back in the race, so to speak? Is it just, hire a big consultancy and you're on your way, or what's the guidance you give them?
B
Yeah. So first, as you pointed out, I'm a journalist, not a consultant, so this is a big-picture take; I don't want to go into the weeds of a company, because that's just not where I play. But what I would say is: okay, you're behind. You know that. Just be honest with yourself and your team if that's the case. Secondly, this is about a culture. It's about having a data mindset, about seeing the world and all things in it through the lens of data, and not through what a vendor says you can collect and analyze. Bring your humanity to the table, and bring your ground truth: your deep understanding of the customer and the customer's pain points, of what you do, and of the values that emanate from you that you want to put into the world. And then you're probably going to need to collect a very different data set, and you're probably going to need a more bespoke system, because what was right for an e-commerce merchant in one country, at one time, or in one sector is not right for you. I know of one content company that wanted to look at its least performing articles and just snip them off. The least performing articles are also the most expensive to produce, as you can imagine, because they're niche articles. If it's in finance, it'd be something arcane about bank living wills and stress testing under the Basel banking requirements, things that seem just ridiculous. I said, wait a minute: why don't you attach a lifetime customer value, an LTV metric, to every single article, and identify on a per-article basis the value of the article relative to the subscribers on a lifetime basis? What you might find is the inverse of the blockbuster effect. Everybody likes the sexy article about Facebook launching a new cryptocurrency, because it's what everyone is interested in; it's in the air. But those people come and go off your rolls, so that's not that important. You might find that the articles that get very little traffic and are very costly to produce are where you have your most prestigious and most durable subscribers. You don't want to get rid of them; you may not even need more of them; you just want to keep that as it is. The success of a company is often as much by luck as by intention. Most serious business people understand that they are inheritors of something, and you don't really know what works and what doesn't. Every day is a new product-market fit, because the world changes and people's tastes change. So you want to be very cautious in what you do. There's a political philosophy behind this, from Edmund Burke in the 1700s, called conservatism: not preventing progress, but reforming by doing the most careful and circumscribed thing.
And I think that's what I would argue businesses need to do: apply data, but be very careful, and don't just go at it with abandon thinking you can do absolutely everything. You want to be very judicious in how you apply your data.
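The per-article LTV idea can be sketched in a few lines. Everything below is hypothetical: made-up subscriber values, article slugs, and costs, plus a simplifying assumption that a subscriber's lifetime value is split evenly across the articles they read.

```python
from collections import defaultdict

# Hypothetical data: subscriber -> (lifetime value in dollars, articles read).
subscribers = {
    "s1": (2400, ["basel-stress-tests", "crypto-launch"]),
    "s2": (3000, ["basel-stress-tests"]),
    "s3": (120,  ["crypto-launch"]),
    "s4": (90,   ["crypto-launch"]),
}

article_cost = {"basel-stress-tests": 5000, "crypto-launch": 800}

def ltv_per_article(subs):
    """Attribute each subscriber's LTV evenly across the articles they read."""
    ltv = defaultdict(float)
    for value, articles in subs.values():
        for article in articles:
            ltv[article] += value / len(articles)
    return dict(ltv)

ltv = ltv_per_article(subscribers)
for article, value in ltv.items():
    print(f"{article}: LTV ${value:,.0f}, cost ${article_cost[article]:,}")
```

In this toy data, the expensive niche article carries far more subscriber lifetime value than the high-traffic one, which is exactly the pattern described above: traffic alone would have told you to cut the wrong article.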
A
Well, hearing you say that, it feels like the same principle probably applies to any adoption of new AI technologies as well. Are you finding there's a temptation to just throw AI at everything? And are people being diligent enough with that conservatism, asking where the risks are, what the crown jewels are, and how to make sure they're taking the right approach?
B
So, Jeff, interestingly, that's actually a slightly different question with a different answer, and it's a great one. The thing about data, the data shtick if you will, is essentially a century-long endeavor. It's basically the foundation of statistics and the sciences, dating in a substantial way from the late 1800s to today. So when I ask business people to apply data to their business problems, I'm basically saying: adopt the technique that scientists used at scale between 1930 and 1970, when the sciences became very mathematized. You should do it too, because now the tools are accessible; Excel has all this power you never use, to take just one small example. For AI as it stands today, particularly if it's an LLM, I actually believe: go for it. Literally do everything. If you've got 15 ideas, and five of them are good and five are weak or middling and five are pretty laughable, don't cull them; do all 15. If it costs too much, do it anyway. Find a way to make it happen. The reason is that this is a period of experimentation. Nobody really knows what's going to work. The famous study from last fall, from a group affiliated with MIT but not really MIT itself, said 95% of corporate AI pilot projects weren't working. I wasn't surprised by that; I was surprised the other way: wow, 5% are, that's great. The reason is that by the time something is a corporate project, it has buy-in from the CEO, it comes down from the top, it has a budget, the general counsel was involved, the IT people are involved. That takes a lot of effort. That's a real project, and that's not where I'd look for the value of AI. What I'd do is ditch the corporate hierarchy's top-down, try-anything-I'm-panicking approach of management and look at the atomic unit of the individual. Not 9 to 5 when they're in the office, but 5 to 9 when they're at home. When an executive has to get something done for the next day and is stuck, and he or she has to use AI in a creative, imaginative way they couldn't before, optimizing around their own work needs, that is where AI is going to be experimented with best. That's where the learnings will then scale and go up to the organization. So I'm very bullish on organizations, and particularly on the atomic unit of the individual: the executive who comes up with an idea from the bottom up as well as the top down and says, let's try it all. This is an incredibly febrile moment, and it would be ridiculous to apply the very sober, patient, careful rules of business at a time when there's so much happening, when innovation cycles can be very quick and the cost of innovating doesn't have to be high.
A
So you see it more as, if I'm using the analogy right, almost a petri dish: try everything, experiment as broadly and as grandly as possible, and we'll know the winners when we see them. But for right now, let's throw it all at the wall. Is that fair?
B
Totally, yeah. Totally a playground. Just play with it. That's what we have done at the Economist. I was very proper and responsible and careful and judicious and nervous about all the things we were doing. And then in one of the meetings, the editor basically blurted out: look, I know most of these things are not going to pan out, but we'll be ready when we have something that will pan out. And I thought: yep, that's exactly it. Let's go, let's play, let's lose money, let's experiment. The risk of not being primed as an organization, as a culture, when the moment is right, that's really the danger. So we're doing lots of things, and some will work and some won't, but none of them are going to reshape our business. However, I'm sure that if something does come across our spheres that could reshape our business, we'll be the sort of organization that's ready to go for it.
A
So can you tell me a little more about the approach at the Economist? Because on the one hand, to the conservatism argument, you're holding and caring for this very high-brand-value institution that's been around for, what, over 180 years, and I'm sure that comes with its share of responsibility. So what's been the posture for some of these experiments? And can you give me a sense of any of the early, exciting projects where you've said, wow, I didn't know this could do this for us, versus the things that turned out to be clear losers?
B
So, yeah, we never knew in advance what the clear losers would be. I guess the first thing that had to happen is that we needed a different mental model. LLMs, generative AI, generate content; you don't need to be a rocket scientist to know that. But once you realize that, you realize you can create more. Whatever you're doing, you can create more of it. And there were a lot of people throughout the organization with lots of ideas about creating more. And we had to have a very serious, sort of come-to-Jesus moment as an organization to say: hey, every other media company on planet Earth is talking about creating more, so there's nothing differentiated about it. What makes the Economist unique, special, distinctive, the source of our value, is not that we have more; it's that we have less. So in a world in which everyone is creating more, how can we do something where we're actually creating less? So instead of building a generative AI system where people could query our information and get another article, we did article summaries, making our articles tighter and shorter so people can see just the gist of a piece without having to read the whole thing. Now, was that going to cannibalize our articles? Well, let's experiment and find out. If it sinks the ship, we've got a problem. But maybe it's going to enrich, add more value for the reader, and that would be a good thing. So that's what we started doing. Now, we put a lot of command and control around it. We didn't have the AI just go off on its own; human editors curate what it generates. I think it's a quite low-stakes thing to do.
But the important thing was the reframing that took place, the reversal, right, of saying, this is what it's used for, but actually we're not going to use it for that primary purpose, we're going to use it for a secondary feature of what it can do. And that helped us. We have internal tools. We've got this one woman who responds to all of the journalist queries, whether it's how do I avoid getting kidnapped when I'm going on this particular assignment versus, you know, you know, how can I, you know, get my housing allowance for this area because it was missed in that payment cycle. And her name happens to be named Ann. Well, lo and behold, we've got a chatbot called Ask Ann, which is all of the Ann's data on what to do in certain circumstances. So we don't actually have to ask the real Ann. We can actually ask the computerized Ann, the cyborg Ann. And if that doesn't work, then ask the real annual. And so it's a lifesaver for her. I one story, one element of what we've done, which I love, is we started translating.
We did audio translations of some of our smaller news briefs. And they were good and we loved them, and the consumers liked them. And we looked at it and we thought, you know what? We're not going to continue with it. It was just a little bit too much effort for the payback we were getting. The translations were good, but they weren't good enough. The audio version was good, but it could have been better. And we just felt that in a world of constrained resources, this was one thing we should pull the plug on, stop, and put our resources into something else. And what I like about that is it's very easy to start initiatives. It's often hard to discontinue initiatives. And this is one example where there was nothing fundamentally wrong with it, but we thought, you know what, let's put our effort in somewhere else. And so we stopped doing that. And I think it was great institutionally for us to see that we have that sort of flexibility, that we could actually go from one thing to another.
A
No, that's great. That's a great organizational muscle to have. I completely agree with you. I've worked with a lot of organizations and seen it a thousand times where things proliferate and then never die, and it just becomes this bloat for the organization. But that's really exciting, Ken, about some of the projects going on there. Really interesting to hear, especially as an Economist reader, coming back to the Anns of the world and the human impact.
One of the things you've been writing about is basically how humans compete in this world that's increasingly algorithmic, where work is more agentic or augmented by machines. Can you share a little bit about your perspective there and what skills you think are most important to the people and the leaders of the next generation?
B
Great question. So, the skills for the doers as opposed to the leaders. For the doers, it's going to be curiosity. You just have to see the world, think imaginatively, and be interested in lots of different things, not for any utilitarian reason, but just for the pure bliss of being curious and learning new things. Because where your mind meanders, you'll find gold. What was the famous phrase from The Hero with a Thousand Faces, Joseph Campbell? Where you stumble, there your treasure lies. And I think the only way to stumble is to forge ahead in the dark and then fall over something. And what you fall over might indeed be a small pot of gold, something valuable and new, particularly if it's something that other people haven't thought of before. You're definitely going to need to be a strong-willed character, because if you are actually stumbling upon the new, the world will not love you for it. Everybody says they want the new thing. Nobody wants the new thing. They give Socrates the hemlock to drink, to kill him. They crucify Jesus. They laugh at Ignaz Semmelweis, the Hungarian doctor at the Vienna General Hospital, who had the most preposterous idea of asking all of the doctors performing childbirths to wash their hands prior to delivering the baby. And they thought it was absolutely ridiculous. But the children and mothers being delivered by midwives were all surviving, while the children and the mothers being delivered by the doctors were dying. The doctors were performing autopsies; they were never washing their hands. And they laughed him away. But of course he was absolutely correct. So innovators always look ridiculous. So you need to have a strong will as well, and some self-confidence, when you're doing this.
This sort of cognitive forging with curiosity to learn new things. For leaders, it's a little bit different. Leaders need to enable those people. Leaders may be the people who have the ideas, but maybe not. And they need to carry that, they need to own that and sort of take it on the chin. By the time you've played the politics to get very high, or by the time you've had to be an institutionalist and a bureaucrat to become a manager, you have other skills that are important to the organization, like constancy, temperament, reasonability, sobriety. It's probably not zaniness, taking risks, going out on a limb, making a case, annoying your peers and your colleagues because you're still going off on this one thing that they think you shouldn't be, and doing something else. The gadfly is a person who changes the world, but they are enabled by wise leaders who let those people pursue their curiosity and pursue their bliss. So the leaders need to set the culture that enables it. If they don't do that, if they think that
They're going to be a risk-averse culture, or they think that they should be the ones to innovate and not the others, that could be a recipe for problems. I don't think it's a problem when it comes to people like Steve Jobs. But Steve Jobs is one in a hundred million, right? Not everyone's going to be Steve Jobs. And the wise leader probably says, all things considered, I'm probably not Steve Jobs. There will be one or two. But for most leaders, the best thing they can do is to build teams that have a culture of curiosity.
A
Well, I love that. And I'm just processing what that means throughout an organization, because an organization of any size is not one leader and their teams; it's a series of leaders all the way up. And it's funny, one of the things I've reflected on from my career is that people love to dunk on leaders: oh, they're useless, who cares about them, why do we need them? But a bad leader anywhere in the organization can have a really tremendous negative impact; I've seen it. And so as you think about it, whether at the Economist or with any organizations you've spoken with or worked with, how do you make sure that culture proliferates throughout the organization and not just in little pockets? Does it have to come from the top down?
B
So ideally it's going to proliferate throughout the organization, but the reality is it might be only in little pockets, and you just have to accept that lightning's not always going to strike everywhere in an organization. You might not want your innovators to be in HR, but you do want them in product. Or you might not want them in product, but you do want them in marketing. So that's the first thing: I wouldn't say the whole organization has to be that way. How else would you do it? I agree with you that leadership is really hard. I think the best thing you can do is to train your managers to be better at working with people, rather than following that 20th-century model which has always failed: take the person who's good at doing this one thing, and then think they can become the manager who gets all those other people to work as well as they do. That's usually a recipe for disaster. It's rare to take a journalist who's really good at being a journalist, make them an editor, and say, now you have to manage other journalists, if they're not a very good manager. In the case of a newsroom that's particularly problematic, and we see it all the time, because the nature of being a very good journalist is to be tenacious, to be opinionated. Also, most of what you do is actually not social but very insular, very quiet: you're reading, you're writing. That insularity serves you poorly when you run a team, where you need to be a bit more extroverted. You need to put on a public face of confidence even if you're racked with lots of doubt, which you can be as a journalist. Yes, journalists are self-aware only on the rarest occasions; not often, but it's been known to happen. So the manager has a particular role to play that's different from the doer underneath them, and that shift doesn't always work very well.
So there are two possibilities for choosing better. The best person at the doer level might not be the best manager; the person who's not great as a doer might indeed be a very, very good manager. Or, if you want to take the person who's good at doing and put them into a managerial role, flood the zone with resources: give them good coaching, good mentorship, probably coaching from the outside as well, like a corporate psychologist, just to talk through what they're doing so they can become a better manager. And then at a supervisory level, you just hear from the underlings: how is that person doing? Are you happy here? Can they keep their staff? What do they like best? What should the person be improving? That term psychological safety is so important to great teams. You see it on a playing court in sports, where you can actually watch it live and the points tell their own story as they accumulate and the hoops go in. When people are working as a team, they can so far exceed their abilities as individuals, because there's a real synergy effect to it. Sorry for all the Dilbert-like cliches, but it's true. So the message would be: invest in the managers to get the most out of the teams.
A
I love that and I totally agree with you. But there was another point you made there too, which was the power of the teams over the individual. Right. And to what degree is it inherent in that, Ken, that it's not?
Is that a rejection of the cookie-cutter approach, where we hire engineers into one scaffolding and every engineer should come in this one mold? How do you build a great team, and how should you be thinking about the individual players in it? And does that need to scale, or is it just looking at people as people?
B
So it's strange. I mean, I think it's about values. I think the first and most important value is being courteous. Sounds crazy, sounds like I'm an old-school kind of person, but I'm not. There's a term, actually, in cultural evolution, a weird form of quantitative social science that looks at how human civilizations form and develop. And the motto in that domain is: it's better to be social than to be smart. The reason why is they show that if you have to be smart about the world, whether you're a hunter-gatherer or a Roman centurion, et cetera, it's all the knowledge that you can possibly accumulate in your own mind, process, and then act on. But it would be so much better if you could learn from other people who've already made certain mistakes, and therefore not make those mistakes yourself and not have to learn them for the first time. So much better to talk to one guy who says, yeah, don't touch the red coals, and another person who says, oh yeah, the red coals are really bad, but there are also white ones and blue ones, so if you feel like it's going to be hot, don't touch it, rather than to touch it and burn your hand. So it's better to be social than to be smart. So how does that play out in a group? Where I've seen organizations work really well, it's because people have a profound respect for each other. Even if they are rivals, or even if they don't have that respect, they're courteous. There have been instances where
I've had to interact with other entities, not within the Economist but outside of it. And I have a sort of no-assholes rule, as the term goes, which is: we can see the world differently, we can interpret a contract differently, we can have different objectives, we can butt heads, and we can find ways to work it out or maybe even leave. But if they say something a bit paternal, if you will, if they say something to my staff and it's inappropriate, they've got a big fucking problem on their hands, right? They're going to either have to justify it or make an apology, and it's probably going to be the end of the relationship. And I think there's no other way. You defend your staff from people who are discourteous, and you expect the same from your peers, from your boss, from all the people around you. There should never be an instance where you're not acknowledging the dignity of other people.
A
I feel like I'm learning a lot about you as a manager and a leader.
B
Ken the brass-knuckles, Tourette's-syndrome leader. Exactly.
A
No, it's great. It wasn't necessarily where I thought we would go today, but I think there are really valuable leadership lessons there. But we're also sort of backing into a question here about
The role that AI can and will play in organizations and what it can replace and what it can't replace. And you talked about this kind of dichotomy, or maybe that's too strong a word, between the social and the knowledge itself.
Is it as simple as: it's not going to replace the relationship, so you need to get really good at the social, and if you just know things, you're in trouble? Or how would you frame that out?
B
So first, you have to respect the technology and suss out what it can and can't do. Secondly, you need to work with the technology and with other people. In some instances, the technology is going to replace other people, and you're going to have to have difficult conversations with them. I think more commonly, you're going to need people to supervise the technology. There's a great paper out by some Princeton computer scientists called "AI as a Normal Technology" that makes that case. They do a very good job of showing that just as we had power tools, we didn't give the machinery and the robots a chance to do their own thing; we had human beings set the parameters of what it would do, and then they supervised it and handled the edge cases. And they ask, why would this not be the case with AI as well? We're already seeing that with AI: really idiotic companies are handing things off to the machine and letting it run amok, and smart companies are saying, actually, we want to find the right ways to apply it, but we want human beings to supervise it. So I think we're going to need people to be better at being people, bringing their humanity to their work, and we're going to need managers who are just better at working with people and acting more like coaches. That, in fact, has been the great trend of the last 40 years in American business, in which the manager has gone from being the supervisor to being the coach and the mentor. You're seeing it everywhere; all you have to do is pick up a management book from the 1970s and then a management book today, or talk to leaders today, and you absolutely see it. And the progress that people have made has been exceptional.
One of the reasons why the American corporation has outperformed so many other corporations is the cultural aspect of its leadership: we allow people to be people, to be not only transactional but relational, and we get the most from other people by recognizing that they're bringing their whole selves to work and that they want to do a good job. And the boss's role is to be the mentor and the coach and the inspiration, not simply the person who, like in Charlie Chaplin's Modern Times, is glaring and making sure the guys at the assembly line are turning the wrench.
A
Yeah, the guy holding the stick, so to speak.
B
Right.
A
Yeah. So I want to pull on a slightly different thread you mentioned earlier: this notion that you need to have a strong stomach for a lot of the innovation stuff here. Whether it's AI or any type of change, there's going to be a big backlash, whether it's internal to your organization or with consumers or society. People just have a limited appetite for change. And I'm curious: you've written previously about what was called at the time a "techlash." I feel like I haven't heard that term lately, but I've experienced it in a lot of ways around AI, even if people haven't used that label. So I wanted to ask you a couple of things, Ken. The first one is: what does that look like from your perspective now versus how it looked seven or eight years ago, this techlash against big tech and against these technologies that people are worried about? And then, does that inform you at all as an editor around the messages you want to put out into the world about technology? Are there things that we have a responsibility, as journalists or as technology advocates, to share in a world where there's so much anxiety about this?
B
Geoff, those are great questions. So, to the first element, the techlash and where it is today. I think we actually do have an AI-lash. The techlash has still been there, but you're not hearing about it, I think in part because for people of a certain generation, of a certain age, we've just become inured to it. We just accept it as the thing rather than as something different. But also, if you look at the usage rates of a lot of social media platforms, they've plummeted, whether it's Twitter/X or Facebook. LinkedIn is still doing well, but there are a lot of people who are abandoning social media because they are uneasy with it. For people of a certain age, though, they just take it as the status quo. Younger people, and I'm thinking 30 and under, 25 and under, are very anti-AI. And that's really interesting to me. After spending the four years of their teenage years with their phones surgically implanted in the palm of their hand and pressed up against their eyes, they now are trying to get rid of it, go for walks, and find other ways to seek deeper meaning in the world, rather than simply feel like they're victims and hostages to the machine. It reminds me a little of the anti-smoking movement of the 90s and 2000s. The difference there was that there was a cohort of people for whom it wasn't about cancer and it wasn't about the cost; they didn't want to enrich big tobacco. They just understood that it was just Madison Avenue, and there was something rotten and awful about these companies, and they wanted to avoid it. And so just as we have big oil and pollution, we've got big tech and a different form of cognitive pollution. And so people are rejecting it, and AI is absolutely being rejected by younger people.
There was a second part to your question that I've totally forgotten.
A
The second part of the question was: as a journalist, given that there's some of this AI backlash, is there any sort of journalistic responsibility there, as a technology advocate?
B
We have a responsibility to honesty and to truth. You can't be objective, but you can strive to be impartial. And so I think that journalism should hold a mirror to society, but not a mirror that is naive; one that's informed. On one hand it's a mirror; on the other hand it's a flashlight. And with the flashlight, the question is where you put the beam. We should be wise enough, as careful custodians of a certain portion of the public sphere (I don't want to say that we speak for everyone), to recognize that we have an audience and they trust us. So we want to live up to that trust: we should beam that light into the areas that are concerning to us and therefore to them, and vice versa.
Which areas? The pathologies of technology as well as the benefits of technology. In the case of the retina scans that we began with, that's low stakes. The data already exists; it was being used for one purpose and now it's going to be used for another, and that's a beneficial one. But if it's used to identify the racial makeup of someone, or other characteristics that could lead to harm to that individual, then we should say, hey, you shouldn't use this, and ask how you build the proper safeguards against the misuse of that information. That's what good journalism should always be doing. And it's harder to do today because it's costly to do; you need good readers who respect that and honor it. And we're in a world in which, I think, the press has not always lived up to the expectations that it owes to itself.
A
Right. So let me ask you maybe a more specific question about that. I absolutely agree with you, and I love the use of the word impartial there. And I have to imagine, in some cases
With this beat around AI, there's so much money at play, so many influential voices trying to sell you their version of the future, so much marketing, so many different voices telling you different things. Is there anything that you're consuming now, or that you're hearing, that you're most skeptical about, or that you think is BS, where you try to make clear in your and your team's journalism that this is boosterism and readers should be skeptical of these messages?
B
So we've always been saying that these things are boosterism and readers should be skeptical. In fact, our whole nature has been to be, I think, very balanced in our coverage, going back years. It's almost muscle memory, a sort of Weltanschauung, a worldview. The way that we see the world is not to buy into the hype. If anything, we try to disentangle what is actually happening and what is legitimate from what people say. I'll give you a strange example related to Covid, which I think is useful because it's almost the most glaring way in which many people in the media fell down, about six to nine months into the first lockdown. So this was 2020. A group of scientists got together in Great Barrington, Massachusetts, and created something called the Great Barrington Declaration. Their idea was that locking down society altogether was ridiculous: don't lock down the whole society, just protect the vulnerable people and keep them separate from the rest of society. If you do that, you'll protect the vulnerable and you'll let society function. If not, the drawbacks of locking down everyone else are going to be manifest in terms of lower income, marital disputes, kids not learning, et cetera. And there was a whole dimension of the media, which was interesting to see, that had a knee-jerk reaction against it, because it didn't fit the narrative that a lot of people had. Other people looked at these academics and said, oh yes, well, he's a professor of neurology at Stanford, but he's not an epidemiologist, so he's not the right kind of specialist, and they cut it down that way. So at the Economist, interestingly, our science team, we've got some remarkable people here.
They did a three-page analysis of what they were saying. And they came to the conclusion that it wouldn't work and it wasn't the right idea. At the time, I had a podcast on science for the Economist, and I remember interviewing one of the people involved in it, actually people on all sides of the spectrum. And I was struck by the people involved being so grateful and thankful for our coverage. And I said, yeah, but we said it was bonkers. And they said, yeah, but you took it seriously. You took the time to think it through, analyze it point by point, and come to your conclusion. We're so grateful that you had the integrity and the honesty and the goodwill to actually treat it substantively rather than with a knee-jerk, preconceived notion about it. Just as we did in that instance with Covid, so too with artificial intelligence, and I hope with all things we do: we come to it with an unassuming form of intelligence that says we have our values, we think certain things, but we should examine it on its merits. And we're not going to be swayed by what some marketing department says or what someone in some other news organization says. We're going to think for ourselves, because our readers expect that and they want to think for themselves as well. So we want to give them both sides of the argument.
A
Well, first of all, thank you for doing that, as a reader and as a think-for-yourself advocate. I'm sure you face it in some ways, and you've talked about it already in terms of what the Economist will and won't do, turning down a more-is-more approach. Because it feels like, as a consumer, or just a person living at this moment in history, we're being inundated with more low-value crap that says: don't think about this too hard, or don't think about this for too long, because it's not designed for that. And it's hard to believe that's good for us as a society, for our ability to think critically and make the right decisions and, I don't know, be good people, ultimately.
B
I think it's a serious problem. I mean, the question that we struggle with is that we're a subscription-based product. There is a small tribe of people who are our audience, 1.5 million people or thereabouts who pay for us, but it would be interesting to think: how can we have an even bigger impact? Granted, a lot of those people are themselves journalists or opinion formers in other ways, so there's a trickle-down effect of the integrity that I think we bring to understanding the world, which then gets disseminated more broadly. But we wish that we had an even bigger impact still.
A
Yeah, it's so important. And I'm curious, and I don't know the answer to this question: do you have kids, Ken? And if so, what's your posture toward their media consumption and social media consumption?
B
Yeah, I have children. One has no interest in social media and only a smidgen of media, but that's his thing. For the elder, who's a teenager, it was tough. Social media sort of pulled them away from planet Earth for a while, and now they're rebelling against it, which is really interesting to see. They're going for long walks, and they don't want to be victims of the machine. Seeing that was one data point for me, but then I started seeing and hearing more about it. Now I'm ready to go out and say: hey, it's a thing. The AI backlash has happened, or is in the process of happening.
A
Yeah. And maybe this is a weird thing for the host of a largely AI-driven podcast to say, but I love that. It's very exciting to me that people are willing to kind of detach from that, look for meaning, and question those things. We've talked about a lot of advice in a lot of different forms here. What's your best advice these days for business leaders just trying to navigate this entire AI and technology landscape?
B
Learn, learn, read, read about the technology: how it works, just the basic stuff, generally speaking, at a high level; what the trends are. Have people you talk to on a regular basis that you trust in your organization. I've held what I call lunch-and-learn conversations. Imagine an hour and a half; feed them, get some high-end takeaway food into the office, so there's a little specialness to it, a brown-bag sort of lunch but with a bit more polish, because it advances them and advances you as well. People will like it. And then have conversations where you're actually learning and discussing with your team. When they see that you're doing these things, reading a book like some of the new books that have come out on AI, reading the Economist, sharing an article with the team and using it as the basis of a conversation, they will start doing that as well, and learning. This is such a fertile moment in the world, because the technology is so new and the risk of not doing the right thing with it, of being lost, is high. Even though we said that there is time to catch up, there's not unlimited time to catch up. The Internet did destroy Blockbuster and Tower Records; those were real examples. Sears, Roebuck, the great mail-order catalog company and one of the strongest companies in corporate America from roughly 1910 to 1960, did go bankrupt as the rise of Walmart and Amazon destroyed the mail-order catalog business. So bringing people together to have those conversations is really, really valuable. Going back to the boss as coach and mentor: the coach is great because they're choosing great players. And so I think that's a useful metaphor, to say
Bring together teams and encourage them to be the best they can be. That sounds like claptrap. I get it. But it actually is true and great organizations are doing that.
A
Ken, this has been a super interesting conversation. I really appreciate your insights and your time.
B
This was brilliant. Geoff, fabulous questions, great audience. Thank you.
A
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.
Podcast Summary: Digital Disruption with Geoff Nielson
Episode: Go All In on AI: The Economist’s Kenneth Cukier on AI's Experimentation Era
Date: December 8, 2025
This episode features an in-depth conversation between host Geoff Nielson and Kenneth Cukier, Deputy Executive Editor for AI at The Economist and bestselling author of Big Data: A Revolution That Transforms How We Live, Work, and Think. The discussion centers on the current “experimentation era” of AI, the real versus hyped potential of the technology, its societal impact, the evolution from big data to AI, and pragmatic advice for businesses and leaders navigating massive digital change. Cukier's perspective blends journalistic insight, business acumen, and first-hand experience guiding innovation at a legacy media institution.
“AI was able to identify something in the structure of the scans that humans didn’t even have a theory for.” — Kenneth Cukier [03:33]
“The doctrine is: more data, better answer.” — Kenneth Cukier [16:08]
“Bring your humanity to the table...and your deep understanding of the customer and their pain points.” — Kenneth Cukier [21:58]
“If you’ve got 15 ideas...do all 15. This is a period of experimentation. Nobody really knows what’s going to work or not.” — Kenneth Cukier [26:17]
“It’s very easy to start initiatives. It’s often hard to discontinue initiatives. And this is one example...” — Kenneth Cukier [33:49]
“Where you stumble, there your treasure lies.” — Kenneth Cukier [35:38]
“It’s better to be social than to be smart...organizations work really well [when] people have a profound respect for each other.” — Kenneth Cukier [44:14]
“We have a responsibility to honesty and to truth...You can’t be objective, but you can strive to be impartial.” — Kenneth Cukier [53:20]
“Learn, learn, read, read about the technology...” — Kenneth Cukier [62:40]
Throughout, Cukier remains pragmatic, curious, and slightly irreverent—deeply analytical yet approachable. He’s wary of hype, encourages intellectual honesty, and stresses humility and humanity alongside technological curiosity. Both he and Geoff maintain an exploratory, optimistic, and occasionally candid style.
The AI era is here—but we’re still in the “wild experimentation” phase. The essential advice for individuals and organizations: embrace experimentation, stay curious, deepen your understanding, and double down on human creativity and empathy. AI brings unprecedented opportunity and risk; its impact will be broad, profound, and, ultimately, not just technological but deeply human.