A
If you take technology and provide it in an environment where the underlying human forces are well intentioned and capable of acting on that intention, then almost always that group of people will use that technology and have even more impact. But there are also contexts where the human forces are negative. For example, there could be corruption in government, or in some schools, teachers don't really care that much about their students. In any situation like that, where the underlying human forces are either negative or neutral, or not capable of acting on their intentions, the technology amplifies that. In some cases it can make the situation worse. More often than not, adding technology just doesn't change the outcomes much at all.
B
Welcome to the Work for Humans podcast. This is Dart Lindsley. This is one of the episodes I refer back to most often in later episodes. Kentaro Toyama spent a decade designing technologies to fight global poverty and improve education and health. As co-founder of the Microsoft Research India lab, he made a troubling discovery: innovative technologies can't create change on their own. New technology is not always better, and even in an age of amazing tech, social progress depends on the humans more than the gadgets. Kentaro's research, described in his book Geek Heresy, asserts that technology acts as an amplifier of human traits. It improves things when people and organizations are good and capable, but it has little or negative effect when people are careless, dysfunctional, or corrupt. Kentaro is W.K. Kellogg Professor of Community Information at the University of Michigan, a fellow of the Dalai Lama Center at MIT, and author of Geek Heresy: Rescuing Social Change from the Cult of Technology. Before moving to Michigan, Kentaro co-founded Microsoft Research India, where he helped grow the lab to 60 full-time research staff. In this episode, Kentaro and I talk about why technology needs a human touch to succeed, Kentaro's experience leading Microsoft Research India, and the ten fallacies of technology. We also discuss how to create societal change, innovation versus tried-and-true approaches, the law of amplification, three elements of intrinsic growth, and other topics. All right. As always, please continue to show your support for Work for Humans by hitting that subscribe button. And now I'm thrilled to present my conversation with Kentaro Toyama. Kentaro Toyama, welcome to Work for Humans.
A
Thank you, Dart. It's great to be on.
B
So you wrote the book Geek Heresy, which is, by the way, a great title.
A
I owe a friend of mine for that. There was a journalist who called me a geek heretic in one of his articles and so I used that.
B
Well, we on Work for Humans need to talk to a geek heretic, and so it's the perfect conversation. And here's why. Basically, I'm going to state the thesis of the book, just so we get it on paper here: technology point solutions, without context, without understanding the situation, without root cause, and without a lot of humanity, fail. That is such a shallow description of what you've actually written, but it's a starting place. And the thing is that for those of us who work on the experience of work, there's always a push to roll out some new technology to improve the experience of work or to make work different. And the truth is, I've come to doubt those pushes very much, because I believe the challenges in companies are human, not technological. So what was your path to heresy? Where did you start? And when did you start to doubt the orthodoxy?
A
Yeah, thanks, that's a good question. And I would say there's no one moment, but I can definitely describe the path. So in 2004 I moved to India. I was working for Microsoft at the time, and I moved to India to help the company start a new research lab there. One of my reasons for going was that one of the missions of this new research lab was to see how we could apply digital technology to address the problems of the developing world, of places that are impoverished, of places that often we don't associate with technology. And so I threw myself into that work. Over the next five and a half years while I was in India, I either personally ran or oversaw something like 50-odd research projects, all trying to find ways to use digital technology to address education or agriculture or healthcare or governance in the context of a relatively low-income country and low-income environment. And we were successful as researchers, in that we would come up with these digital technologies, some of which were completely new and customized for a particular purpose, some of which were off the shelf and we just plugged in, and we found ways to use them to serve those goals, whether in agriculture, education, or healthcare. And we thought, hey, this is great. But then the moment we tried to take those technologies and find a way to scale them up, we would inevitably run into all kinds of challenges. And my conclusion after five and a half years of doing this was that those challenges were always ultimately challenges of human beings, whether it was a lack of capacity, a lack of good intentions, or a lack of institutions to do that work. So over that five-year period, when I kept seeing this happen over and over again, I started thinking, hey, maybe just having a good technology, even a great, well-designed one, is not the solution to challenges like poverty or the lack of livelihoods or corruption or poor healthcare.
B
What's an example of one of the things that you rolled out and how it ended up?
A
An example I give in the book, which I think is a great one because it showcases how, on the one hand, you could see the potential of the technology very clearly, but which ultimately failed, was a project we called MultiPoint. We used all the practices of what's considered good design. We would go into these communities where we wanted to have an impact, and we would really immerse ourselves in that environment. So we would go to government schools in rural India and see how they were already using computers, if they had them at all. And in fact, quite a few had computers. Usually what they would have is a computer classroom, which is a pattern that we also used to see in the United States. Different classes of students would come into the computer classroom, and the teachers would try to help the students use those computers. But in many cases there were not enough computers for a one-to-one situation; you couldn't have one student per computer. And so what would inevitably happen is five or six or even ten kids would surround a single PC, and they would all pile onto each other until one kid, usually an upper-caste boy, would somehow emerge with the mouse and the keyboard. And then everybody else was just stuck watching. We saw this so many times that we thought, okay, this isn't great. At most one kid is learning, and everybody else is left out of the interaction. So we had a very simple idea, which was: let's just plug in as many mice as there are kids. This was an era when USB was pretty ubiquitous, so it was quite easy to do the plugging in. Then we wrote special software so there was basically a cursor per mouse, each differently colored. It looked confusing to watch, but kids figured it out very quickly; they would realize, okay, I'm jiggling this mouse, so that thing jiggling on the screen is me.
And we would write these computer games to help students actually do things like compete in a vocabulary game or practice math skills together. After very careful evaluations, we found that for up to five kids, you could have a situation where five kids to a PC could learn as much as if they each had their own PC. So it's great, right? It's a great story of efficiency. It's exactly what computers are good at: doing something once, then replicating it and scaling it. And so we thought, hey, at the very least, this is a story of five-to-one for the same cost. Let's try taking it to other schools. And we started going to other schools. What we quickly found was that we often couldn't even get to the point where we could get the setup to work on a computer in an actual school. Sometimes there was resistance from the outset by the headmaster of the school. Sometimes we found teachers who were very afraid to have their kids do anything with the computers, even when they were there, because they themselves were a little bit unsure of their own computer skills. And oftentimes we ran into all kinds of technical problems: the power wasn't there, or the power surges were so intense that we had to get surge protectors, and all of this. And then, once we actually got into a classroom, the classes are maybe 40 or 50 minutes in total length. By the time you set up everything, 15 minutes of the class is over. By the time you explained everything to the kids, more than half of the class is over. So the actual time we could spend doing all of this was very minimal, even in the situations where it would work. The end conclusion of all that was: okay, we have a good technology, but the larger infrastructure and the institutions of the schools are just not ready for this. They were not ready technically.
They were not ready in the sense of pedagogy. They were not ready in the sense of trying something that was reasonably innovative in this context. And that's just one example. But I would say that in the 50-odd projects we did, that pattern happened over and over again. We had a good technological idea; it would work under ideal conditions, but those ideal conditions were almost never actually out there in the world.
B
What was the motivation of the company in doing this in the first place? And I ask because I've been involved in similar projects. For instance, something like 60% of the produce grown in India doesn't make it to market because it rots. I was working for Cisco Systems at the time, and it's one of these things where, if you have a particular technology and it's a hammer, everything looks like a nail. We thought maybe this is an information problem, and we're good at information, so we're going to go with that as a solution. Well, it was a refrigeration problem more than anything. But I always felt that although everybody working on the project was honestly altruistic, the idea was also to open up a market. So what was the motivation behind your work?
A
Companies are pretty complicated things, especially when they're the size of Microsoft. I would say there were multiple motivations. First of all, I was in a research part of Microsoft, and Microsoft Research has historically been sort of like Bell Labs, in that the researchers are really encouraged to do open-ended research of the kind that they find personally interesting. That's the kind of environment researchers thrive on, and so I think that's the part of the company that is enlightened and just wants to drive better research. But it's also true that in India, for example, we worked fairly closely with the India sales and marketing team, whose main customer in India at the time was the government. And the government loved these kinds of projects as a way to open doors, to start conversations. Oftentimes I would be like the show-and-tell demo that a sales team would bring in and say, hey, look at these cool projects that we're doing. Don't you want to do these things? That's the way they would establish a relationship with, let's say, a minister or somebody who might eventually become a customer. So there was that side. And then the third thing, of course, is PR value. These kinds of projects make the company look good. In fact, I remember having conversations with colleagues who did more technical computer science, and some of them would say, some of our work is brilliant; how come it doesn't get as much press? And it's because it's not as clear to the general public why it's valuable. So I think there are at least those three reasons. You're certainly half right that some of the intention behind this at a private technology company is finding a way to connect it to the bottom line. But at the same time it was a fairly distant bottom line, at least at the level at which we were engaged.
B
That's my experience as well. A great deal of the book is dedicated to the technocratic orthodoxy and the ten Tech Commandments. What's interesting about them, and I'm going to go through them because I think a lot of our discussion will be about them, is that they share a lot in common with the assumption that I can roll out a technology into a company and it's going to change everything. They're almost like fallacies. Some of them I'll just state because they're obvious, and some of them I'll want to drill into. Measurement over meaning: value only that which can be counted. I see this all the time. I don't know if you have any insight into what's underlying it.
A
Sure. First of all, I'll just say that if you hang around even for a couple of months with large corporations and the people who work there, you'll very quickly hear something like "goals should be SMART." I forget what all of the letters in SMART stand for, but the M is "measurable." I don't know what idiot consultant came up with this idea, but it is taken as absolute truth in corporate America, and I just cannot think of anything so far from the actual truth. You'll often hear, "if it can't be measured, it can't be managed," and I'll just say that is flat out wrong. Now, I can see why we want to measure. Taylor-style scientific management is the idea that you should measure every aspect of an employee and what they're doing, and then try to manage so as to improve those metrics. It's very tempting if you're at the head of a company, because it's one way to see progress. Measurement allows you to visualize progress, and if you can't measure, it is true that it's much, much harder to see progress. But I'll also say that in so many areas of our life, we don't measure and we still believe that something positive is happening. One obvious context in which that's true all the time is parenting. Now, I'm sure you have audience members who probably have a spreadsheet for their kids, but to most people that sounds anathema: this idea that you should turn your kid into a whole bunch of numbers for the express purpose of just nudging those numbers forward. If you are like most people, the reason that feels so terrible is because we know that kids are much, much richer than however many numbers you can put together about them, and that the decisions you make about them really have to incorporate a bunch of things that maybe in theory it would be nice to have measurements for, but which in practice you effectively can't measure. How kind is a kid? If you decide to adjust their bedtime, what's the total impact on the kid, not just on their spelling test? Things like that. So I think that's one of the issues: leaders want to be able to see progress, because that's how they convince themselves that things are working. But if you only insist on the things that you can measure, and only care about those things, then you're missing, I would argue, the vast majority of the things that you should care about in running something as complex as a company.
B
Yeah, and probably the most important things. The next one: quantity over quality. The subheader there is "do only those things that affect millions of people." I'm going to add a few things: the subtext is that anything that does not affect millions of people is not worth doing, and if I can affect millions of people while wearing a cape, even better. But it's true, technology scales, and so you measure technology in terms of billions of users; it's how many users you have. So it's tempting to say something like that. And I think later in the book you point out that change happens a person at a time, very locally. So this idea that it has to affect millions of people is not quite right.
A
It's fair for companies and policymakers to think in terms of millions of people. That's really the only level at which most companies and most policymakers, at least at a national level, can afford to think. But the mistake is in thinking that if it doesn't touch millions of people, it must not be worth doing. That's the error. And I think what happens is that when individual people all think that way, it leads to a situation where we're all doing peanut-butter things that we spread thinly across millions of people, none of which ultimately causes any real change for any one person. So it's a bit of a subtle point. I'm not fundamentally against quantity, and I do think there are certain kinds of decision makers who can only afford to think about quantity. But that's not true for most of us. I would be happy if I could change ten people's lives enough that they really felt my presence in their life made such a big difference that it was worthwhile. That would be enough for me. And if everybody in the world felt that way, we would see a huge change in the world.
B
Hey, everybody. Here are some upcoming events. On March 5, 2026, in Oakland, Robin Zander, the organizer of the Responsive Conference, is launching the first of a new series, the Snafu Conference. It's about something that's important to all of us who are mission driven, which is how to sell yourself without selling out. Remember to use promo code elevenfold. That's eleven fold. To get a significant discount for tickets to SNAFU. Also, big event on March 20th: I'll be speaking at PX Live in London. Luc Omani and his remarkable community of PX leaders are getting together for a one-day event. If you want to deliver an extraordinary people experience, this is the single best opportunity to meet kindred leaders. Thanks, and watch this space for announcements about my future speaking events.

My wife's a schoolteacher, and over 25 years she has taught, oh, now I have to do the math, I'm going to make it 20 so it's easier, let's say 700 students who have gone through her class. She's also had about 60 student teachers, and she is an extraordinary teacher. So her student teachers got to learn from her extraordinary abilities and take those on, and they taught for 30 years or are going to teach for 30 years. So now there's an amplification happening there. It's one at a time. Every single one is one at a time. But it does add up, right?
A
It does add up exactly over time. And because of the likely depth of her work and her impact on some of those people, the adding up has a lot of depth and quality to it as well.
B
Yeah, my father had 800 academic descendants by the time he died. That has to do with living a long time while they continue to reproduce. So the next two are related, and the second one is harder to understand, so I want to anchor on it. The first is ultimate goals over root causes: focus narrowly on the end goal to achieve success. The second is destinationism over path dependency: ignore history and context and take a single hop to the destination. I want to talk about path dependency as something that's missed, if you could expand on that.
A
There are a lot of contexts in which the goal is very clear, and it's very seductive to try to find ways that just get you directly there. For example, one of the things that economists say about industrialization is that in most of the rich industrialized world, including the United States, something like, I don't know what it is exactly, but I think about 70% of the working population works for a reasonably large organization like a company or a government. Whereas in much of the poorer world, many, many people are micro-entrepreneurs. They are scrounging a living based on selling things here and there, and they're technically their own boss. And so one way to think is to say, well, if we want to get to rich-world status, the answer must be that we just form a bunch of corporations and employ more people, because that'll get us closer to 70%. Right? But it's not clear that that would actually work. There's a whole bunch of things that need to be in place for that kind of situation to work: everything from laws that are good for private businesses in general and that support businesses, to employment laws so that workers don't get completely ripped off in the process of working for a large company, to anti-monopoly regulation, et cetera. All of those things need to stack up. And if you look at the history of the economic development of most of the rich world today, those pieces were built up year by year, decade by decade. So if you just jump to the point of, okay, I'm going to found a bunch of businesses and they're going to employ a bunch of people, it's not necessarily the case that you're going to reach the goal that you want. Another example of that might be with technology. One of the things I saw very early on was this reasoning: we have the digital divide; in rich countries there's 50% penetration of PCs, and in poor countries there's almost none, so the answer must be, let's give everybody a PC.

And today we can see how that played out: by the time you could even have reached such a goal, the parameters had changed. Even in the rich world now, very few people actually have desktop PCs. They might have a laptop; many people don't have either, they have a smartphone. So it's not clear that these kinds of visible goals, and this connects back to measurement again, are necessarily the right things to be chasing. What you really want is a population that is well educated, that can take advantage of technology, whatever the technology is, and that can quickly adapt. And so this kind of thinking, where the destination is based on some role model and you assume that just getting there is the point, I think is a mistake.
B
You know, one of the things it made me wonder about is the issue of solutions looking for problems. I am a species of tech optimist, and the species is one that thinks that if you put good things out there, the problems will find them, as opposed to coming in and saying, you need a PC. In fact, I have my own law, the law of things just lying around, which is that innovation happens when there are enough things lying around that you can assemble them into a collage that actually does something new. There's a way in which putting things out in the world lets you see surprising ways in which people use them that you may not have predicted. I was in Ukraine right after the Orange Revolution, and the cell phone had made that possible, because people could organize in ways they'd never been able to before. They could have mob responses, because everybody got on the phone and said, hey, I'm going to the thing. So when I look at destinationism, it strikes me as related to that question: thinking that a specific technology is the solution to a specific situation without understanding the specific situation. But if you let it, the specific situation might find the technology.
A
I think that's true, and I think what you're articulating is a theory of how technology becomes useful. But I would also argue that it's not necessarily a theory of how to improve society, because it's not necessarily the case that every societal problem can find a solution, even with all kinds of things lying around. In other words, it's a good way to say, okay, if I have a hammer, let me just put a bunch of hammers out there, and then some problems in the world will find that hammer useful. But if you look at it from the perspective of the problems, where every problem is one that you desperately want to solve, then it's not necessarily the case that having a gazillion tools out there will get you to the point of solving those problems. So I think what you're saying makes sense from the perspective of the tool providers or the people producing the technology. But from the perspective of trying to actually solve a particular problem, it's just a more complex version of what you said earlier: hammers looking for nails.
B
Yes, that makes a lot of sense. The next one is external over internal. Do not expect people to change. Instead, focus exclusively on their external circumstances. This is an interesting one for me because what routinely happens inside companies is that we blame the people for their situation. And so it's kind of the reverse thing. In what ways did you see this happening where the technocratic orthodoxy assumes that people are fixed?
A
I actually think this is one of the more interesting things that I came to as a conclusion. And it's interesting because it affects both the extremely technocratic thinkers in this space, including people like economists and policymakers. But it also is true of people who arguably are very human oriented and more on the side of anthropology and the kind of soft sociology. And I'll explain how. So economists generally don't believe in individuals changing. They model individuals as being a bunch of preferences and then you change the external incentives and people move according to the incentives. And so you always hear all the time economists talking about incentives. And incentives are external things almost by definition. If I put money here, then people will go there. And economists are very influential among policymakers. And so I think there's a certain belief about that. And again, at large scale, that might be the most convenient model. But what's really interesting to me is that even anthropologists and others who really look deeply at human beings also assume a similar thing, but they assume it for a different reason, which is the following. I think what it is is they think as soon as you claim that somebody could be different, you are now blaming the victim anytime there's a problem. And therefore what they want you to believe is that every problem is structural and therefore if you change the external circumstances, everything gets better. And what I believe about both views is that they're not taking into account that individual human beings have to change in some deep way for there to be the kind of societal change that we're looking for. Right? So, for example, if you want to end racism, you can create all the incentives that you want. But people who hate purple people are going to keep hating purple people no matter what. 
What you really want is for the purple-people haters to stop hating them, to realize that human beings are human beings, and to act in a way that's consistent with that view. But that requires a deep, deep internal change that, again, economists don't fundamentally believe in or work with, nor, as far as I can tell, do other types of social scientists necessarily want to believe is necessary. What they ultimately all want to do is say there is something out there in the world, some structural force, that we have to work on. And I'm not saying that that view is wrong; I think you have to do that too. But if we don't take internal human change as a very important part of the problem, I don't think we can cause large-scale social change either.
B
I like that. It's very balanced. It balances out my initial statement too, which is that it's not that I don't think people and companies need to change. I just think that it's not 100% of what needs to happen. There's many situational things that are beyond the person.
A
Right, that's right, absolutely. It's really a complicated mix of both: you need people individually to change, and you need the organizations to change.
B
The next one: innovation over tried and true. Never do anything that has been done before, at least not without new branding. I think that one speaks for itself, so we can go to the next one. Intelligence over wisdom: maximize cleverness and creativity, not mundane effort; use intelligence and talent to justify arrogance, selfishness, immaturity, and rankism. We see that a lot. And in fact, there's a class system, in the United States in particular, where if you can prove that you're clever and creative, a solopreneur of some kind making software, doing whatever, versus a teacher who is making a difference on the ground day by day, in deep conversation with individual students, the class system says that the clever person is above.
A
Right. I think modern society as a whole puts any kind of innovation on a pedestal and, in contrast, doesn't value skill of a kind that may have been handed down for generations or centuries but which is super important. So again, your coming back to teachers is great. If you're a really good teacher, arguably there's nothing you have to change from year to year. Sure, you can probably improve, and you should. But it's not the new things that make you a good teacher; it's whatever you're already doing as a good teacher that really matters. So if you keep doing the same thing every year, at least to me, that's not a problem. I would rather send my son to a teacher who is good than to a teacher who is endlessly innovative but not particularly good at the things that make a good teacher. And yet in our culture we don't really have space to honor those kinds of people. You have to have done something new to win an award. You have to have done something new for venture capitalists to find your business interesting. You have to be suggesting something new for policymakers to think, oh, okay, this is something we should consider. And so I think that's a big problem, especially in a world in which we are in fact overflowing with ideas. The question is which ones are actually worth putting effort and time and resources into, and oftentimes they're things that we already know how to do.
B
I've seen one instance of somebody trying to counteract that. It's called the Cotsen Fellowship. What it does is say: many people's lives are changed by teachers, but it's usually not the average teacher, it's the extraordinary teacher. And so they pick extraordinary teachers and take them to conferences. One of the things they say to them is: you are amazing, you are making a difference, you're really important. And it's very powerful what they do.
A
Very interesting.
B
Yeah, yeah. It's an interesting example of somebody trying to elevate a group that is not as elevated as they should be.
A
Right, right, right. And also the skill of doing that, which may not be a new skill, but which some people obviously are better at.
B
Value neutrality over value engagement: bypass values and ethics by pretending to value neutrality. This is an interesting one right now, particularly in the face of AI. And I wanted to bring up the topic of AI; I didn't know I was going to bring it up here, but I would imagine you're seeing a lot of these echo through the halls in regard to AI. I want to focus, though, on the word "pretending" in "pretending to value neutrality," because I suspect it might even be pretending to oneself, such that you don't realize you're not neutral. What's an example of that?
A
There are social scientists in academia who, almost knee-jerk, will tell you there's no such thing as neutrality with respect to values. And that's really true. You can pretend that you're being neutral on some axis, but the reality is that you're then valuing some other value. And so I think the important thing is to be upfront and explicit about what those values are rather than claiming to be neutral. Sometimes one position is neutral with respect to a certain kind of political dimension, and so you can still say, I'm neutral with respect to this, but that means I'm valuing this other thing. And I think just being explicit about those things makes a huge difference. Technologists in general, we all think that we're doing something adjacent to the sciences. And so we all think that there's some kind of truth that is completely just out there, objectively true. And that's true of things like physics, but it's not true of what we do with it. And especially with technology, there's always this question of what we do with the science, of what purpose we are putting it to. I mean, even this idea, you know, most of computing is in some way, shape, or form trying to help us become more efficient. So now we're talking about a world in which efficiency is a value, often the primary value, and that necessarily means there are other values that are being crowded out by efficiency.
B
Well, I think one of the previous points, innovation over tried and true, is a value statement: new is better. You quoted, I can't remember who it was you were quoting, some marketing copy, and you counted the number of times they said new. There was new three times in one sentence, which I don't even know how you do that. Right. So, I mean, it's baked in there. I think there's a value system in valuing intelligence and cleverness over wisdom.
A
Yeah. With respect to intelligence, one thing I always think about is how, for the most part in our world, though occasionally it comes back up, we think of getting things done by physical power as not the right way. You know, nobody thinks that the strongest person in the world is the best person in the world, according to most definitions of best. Right. Occasionally, I don't know, you're in the Olympics and you're doing weightlifting, and so suddenly that kind of physical power makes a difference. But in most contexts, we don't believe in settling matters based on brute physical strength anymore. Although arguably a millennium ago, that was the way that most political decisions were made: okay, my army's bigger than yours, I'm going to crush you, and we're going to decide things that way. We no longer believe in that for the most part. But intelligence is just another kind of strength. And what's funny is we haven't really evolved as a society to see that basing decisions purely on intelligence, that the smartest person wins, might deserve a little bit of interrogation. Right. Like, it should be questioned. Is it always the case that the person who's cleverest about a solution really has the best interests of humanity at heart? And I think these days we see so many examples of that not appearing to be the case in the tech sector that I think we're beginning to question it. But it's such a deep, deep value. Most of us want to be smart. Most of us want our kids to be smart. Most of us think that if you're going to elevate the smart kid versus somebody else, it's the smart kid that deserves the promotion. Right. But I would just say, yeah, we do want intelligent decisions, but those decisions need to be guided by what's good, what's morally good. And that's another dimension from intelligence.
You can take extreme intelligence and apply it to evil just as much as you can take the same intelligence applied to something good.
B
Yeah, there's whole, like, literary genres dedicated to geek heroes. I'm thinking in particular of Neal Stephenson in his book Cryptonomicon, where basically geeks won the war.
A
Right, Right. Yeah, I read that.
B
Which, there's an argument that code breaking was a big part of it.
A
Right. I mean, you know, as long as they're winning against people who have evil intention, it's, like, easy to cheer them on. But what we find in the world these days is there's lots of smart people whose intentions are a little bit questionable, and yet they're still the ones who get called by, let's say, the White House to advise them on technology matters or whatever. And that doesn't seem to make as much sense to me.
B
So, individualism over collectivism. This next one: let competition lead to efficiency. Avoid cooperation, which breeds complacency and corruption. Any inhibition of individual expression, including compromise to support the common good, is the same as oppression. I just finally took the time to really read Friedman, who wrote the essay in the New York Times that basically said corporations' highest purpose is to make money. And I think of Andreessen's techno-optimist manifesto, which says that if you let technology and technologists be and don't interfere with them, the net good is greater than the net bad. It's a really powerful statement: any inhibition of individual expression, including compromise to support the common good, is the same as oppression.
A
I mean, I would say that's what we hear arguably on both the left and the right in the United States in our politics as far as oppression is concerned.
B
Right.
A
Anytime someone says, okay, that's oppression, and it's going to lead to tyranny, what they're really saying is, somebody is not letting me do something. And I think that kind of liberty is important. Right? And we have it. And so I don't want to suggest that we want to give up large amounts of it, but I do think that we are now sounding like adolescents, right? We're sounding like teenagers who are like, why do I have a curfew? Why do I have to be back in time for bed? Why do I have to do this? Why do I have to do that? Why do I have to eat a good, you know, dinner with everybody else? It sounds very adolescent, the degree to which we want this freedom. We're not actually thinking in terms of what's the responsibility with the power that we have. What does it mean for us to support the civilization that we're a part of and to help it, and possibly do so at the expense of some occasional individual liberties, rather than always pressing for even more individual freedom? And I think that's a challenge we have not just in the technology sector, although it's extreme there, but also in other avenues of our discourse, especially in the United States. But I think it's increasingly affecting the rest of the world as well.
B
I'm going to find this quote, I'm going to pull it up right now. This quote from Milton Friedman, this line in here, I think is really important, from the New York Times essay in 1970: the businessmen believe that they are defending free enterprise when they declaim that business is not concerned merely, and merely is in quotes, with profit, but also with promoting desirable social ends; that business has a social conscience and takes seriously its responsibilities for providing employment, eliminating discrimination, avoiding pollution, and whatever else may be the catchwords of the contemporary crop of reformers. In fact, they are, or would be if they or anyone else took them seriously, preaching pure and unadulterated socialism. Businessmen who talk this way are unwitting puppets of the intellectual forces that have been undermining the basis of a free society these past decades. So what you're talking about is not just implied.
A
Right, right, right. Some people are very clear about it. Yeah, Milton Friedman. And then you can trace a lot of what he thinks, as far as that goes, back to Friedrich Hayek. And the funny thing is, history does not bear these views out. So, for example, if you look at U.S. history and the decline of poverty, what you'll find is that from about World War II to about 1970 or so, the United States saw this amazing, almost yearly decrease in the rate of poverty in the country. And during that time, the highest tax bracket that an individual could be in was 90%. So if you earned a certain amount of money, everything above that amount of money was taxed at 90%. 90% means that of a million dollars above that line, you only keep 100,000. Right. So it was exactly the opposite of what Friedman was suggesting, this kind of notion of unfettered capitalism where it's a free-for-all and everyone just keeps what they make. And yet that was also the time when, as a country, all kinds of things were going spectacularly well. Again, poverty was declining. The rate of education and the rate of college education was ramping up. And a bunch of New Deal-esque things were actually supporting the growth of a very large middle class, and things were going in a good direction. And then starting about 1970, all of that went away, arguably for exactly the kinds of reasons that Friedman was pushing, which was this idea that we should remove all the fetters to capitalism. And since then, inequality has increased. We have not seen the rate of poverty decrease. If anything, it's gone up a little bit. And so I would argue that history has shown him to be wrong, but he was so influential that we still have significant sectors of very intelligent and powerful people who believe something akin to what he was suggesting. Although I don't think it's quite as extreme as he was suggesting.
B
I want to do the last commandment and then start getting closer and closer to solutions. Freedom over responsibility: encourage more choices, discourage discernment in choosing. Any temperance of liberty, including encouragement of responsibility, is tantamount to tyranny. And later on, when we get into solutions, you're going to talk about self-control, and that's going to be an important part of the solution. So there was a bigger theme that I kept feeling throughout the book, and one of them is: to what extent is this a critique of the Enlightenment?
A
That's a great question. And I will say that, you know, at least when I was writing the book, I was not as conscious of the fact that it is, in fact, Enlightenment things that I was pushing back against. What I've since concluded is that you could argue that digital technology in some way represents the culmination of Enlightenment thinking. It's the culmination of reason over superstition, of individual freedom over collective responsibility, of science over religion, along almost every axis. Everything that the European Enlightenment was about, you could say that the digital technology world that we live in now has really taken all those ideas as far as they can possibly go. And I do think that it's important. I'm not against the Enlightenment. I think the Enlightenment was a terrific thing, especially for the time. But it's also possible with any idea to take it too far. And I think that's what we're seeing now: all of these ideas have been taken too far, to the point where there's no counterbalancing of the original ideas with some other force. And because of that, we are kind of on a runaway train where it's not clear how to come back. I think with this last one about personal freedom, effectively what we've become so good at is giving everybody what they want. And that might seem like a good thing to do, except that, you know, what happens when you give kids what they want? They get spoiled. They make bad choices. Right? If you let a kid free at a dinner buffet, they only go to the desserts, right? None of which is good if you just keep doing it. But now we're in a world in which all of us are basically kids, and we're all being given, for extremely low cost, the things that we think we want, but with no friction that gives us the kind of pushback to think: okay, is this really good for us as a society?
Do we really want to be spending good chunks of our life watching videos of influencers telling us things that aren't even true but sufficiently marketable so that we are likely to believe them and then act on that, and meanwhile ignore all kinds of other problems that are mounting in the world? That's the world, I think, that a lot of technology has really given us. And that's the challenge. In some ways, it succeeded at maximizing individual liberty. And what it turns out that means is that we have fewer brakes on spoiling ourselves.
B
So what is the right approach? And in particular, you laid out the elements of intrinsic growth, which are progress in intention, discernment and self control.
A
To make it a little bit catchier, I call them heart, mind, and will.
B
Ah, yes, heart, mind and will. And you were having to dodge a whole bunch of language that has been associated with other philosophies. But heart, mind and will, yes. What does that mean?
A
I think of it as the three fundamental components of wisdom. So heart is intention, right? You need the intention to do the right thing. And what the right thing is, of course, can be endlessly debated, but in most contexts, most of us know what the right thing is. Right? Most of us know that if you're in a work context and you're a manager, bullying your employees is not a good thing. Most of us know that if you're an employee and you believe in the mission of the company, working reasonably hard is worthwhile and helps not just the company, but also you to some extent. And of course, in other contexts, if it's the elimination of poverty, then most of us know that poverty is not something that we want to leave as is and that we should try to help everybody be able to have a productive life where they're not impoverished. So that's heart. Mind is, as you said and as I mention in the book, discernment. And it's really just good judgment. And one of the challenges with that is that it is extremely context-sensitive and there are no easy metrics for it. But it's something that, you know, many of us know somebody who's like a good mentor figure, who every time we have a problem, we can go to them and they often have good advice. Those people have good discernment. And then finally, will, or self-control. I want to distinguish it a little bit from self-control in the narrow sense that psychology often uses, which is that it doesn't always have to mean this kind of constant fight against yourself, although some of that is also involved. It's just whatever ultimately helps you do the things that you believe through your heart and your mind are the right course of action. So many of us, for example, know that exercise and good diet and maybe these days taking Ozempic or whatever will lead to a better health outcome. But not all of us do those things. And whatever it is that enables us to do that is what I call will. So sometimes it is sheer willpower.
Other times it might be a system of social interaction, where you're always going out running with your friends and encouraging each other to eat well. Or maybe it could even be a technology, if it helps you. Right? It's whatever system helps you do what you believe to be right.
B
There's a gap that I don't completely understand in the argument, and it's maybe because I left it out in this discussion. So you went into school after school across India, and you looked at schools that had been successful and schools that hadn't been successful, and you looked at the role of technology in that. And sometimes when there was success with technology, it was because there were dozens of other things in place that made it possible. And there was a line in there that I particularly admired. I think it was a quote of yours: "No school can be better than its teachers." It's basically saying that the quality of the teachers is the quality of the school; it cannot get better than the quality of the teachers. And so in school after school, you went in, and where there were successes, it was because of dozens of other good things. And where there were failures, those things weren't there. Technology rode along. You know what we've missed? We've missed the law of amplification. We need to go back. What is the law of amplification?
A
Yeah. So the law of amplification is really a law of technology. And I will say that since I wrote the book, I have not really come across major counterexamples to the law. But the law of amplification basically says that technology amplifies underlying human forces. And what it does specifically is to increase the impact of those human forces along whatever human intention is already there. It's another way of saying that technology is a tool, and it's kind of straightforward. It's almost obvious, I would say. And one of the interesting things about it is, because it's almost so obvious, you can't really trace the history of this idea to any one person. In fact, many, many different people have come across it and have mentioned it in different ways. I think they also thought it was too obvious to try to turn into something more than just a claim about technology. But I think it's really important to reify it and to think of it as a general principle of digital technology, certainly, and probably other technologies as well, because it has corollaries, it has consequences that directly follow from it, but which many of us ignore or forget or don't even recognize in the first place. I'll explain some of that. What the law of amplification says is, if you take technology and provide it in an environment where the underlying human forces are well intentioned and capable of acting on that intention, then almost always that group of people will use that technology and have even more impact toward whatever they were already doing. So things will get better with the technology. And I believe that's the situation seen by many of us who believe that technology is a positive thing. Right? So, for example, in my life, there's no doubt that having a laptop, having a smartphone, having access to the Internet has all dramatically helped my life. But it's on top of an underlying base of good education that my parents and my schools and so on have given me.
And it's on top of my overall capacity to try to do something positive in the context of my job, which may be to teach students or advise students and so on, or to do better research. And so, on top of all of that, the technology amplifies it and leads to a better outcome. But there are also contexts where the human forces are negative. So, for example, there could be corruption in government, or there could be indifference, where, I don't know, in some schools, teachers don't really care that much about their students, or there could be outright dysfunction. You know, in many of the contexts that I worked in, let's say a clinic, nobody there knew what they were really doing, and so patients were not being treated well, and so on. And in any such situation, where the underlying human forces are either negative or neutral, or not capable of acting on their intentions, the technology amplifies that. In some cases it can make the situation worse. More often than not, adding technology just doesn't change the outcomes too much at all. And so that's amplification. And one of the corollaries of amplification is that even if you spread technology evenly, even if you don't have a digital divide, the outcomes mean that the rich will get richer and the poor will stay poor. And that's true on various dimensions. So if you're educationally rich, if you have a good education, then more technology helps you even more. But if you start off with very low education, then adding a bunch of technology doesn't change that fact too much. And the gap between the educational haves and the educational have-nots just increases if you give everybody the same technology. So I'll give you a very clear example of that. I can do a lot more with the Internet than somebody with a second grade education. That's pretty obvious. Now, the technology is exactly the same.
You can give me the same access to the Internet and the same laptop, and give somebody with a second grade education the same tools and same access. But what I can do with that technology is dramatically more than what they can do. Therefore, there's an amplification effect that the technology has that increases the difference between me and that person with a second grade education. And so one very obvious corollary of amplification is that if you just spread technology, all it does is lead to greater inequality. And that's a side of technology that I think people often don't believe. They emphasize that technology has a democratizing effect, which I think is wrong. All it does is make the inequalities even greater.
B
And so the idea of heart, mind and will, and that this is something to focus on, sits in a context of you need to have good electricity, you need to have food, you need to have an educational system that can support you in pursuing what you have a will to do. So there's internal components, and this is one of the places where people may change or may be affected to change. And there are external components.
A
Yeah. So, you know, because of amplification, the question comes up: okay, well, if providing everybody with the same technology doesn't help eliminate inequality, what does? And my answer to that is ensuring that everybody has high heart, mind, and will. So one of the interesting things about heart, mind, and will, and growing them, is that it's actually okay if there's inequality in heart, mind, and will, although ideally everybody has them to high degrees. But it's okay if there's some inequality. We don't mind if there are people who have incredible heart, mind, and will alongside people with much less, as long as those people's hearts really are in the right place. I don't have any problem that there are people in the world like Nelson Mandela and Gandhi and that they're much, much superior human beings to me. I would rather that they exist. What hurts me is people who might have the mind and the will in very high amounts, but not the heart. Somebody who is very smart and very capable of accomplishing their goals, but not intent on doing the right thing. In the world context, we might point to somebody like Putin in our current situation. That's what hurts the world. Not somebody with a great amount of all of heart, mind, and will. And conversely, interestingly enough, the more we focus on everybody's heart, mind, and will, then hopefully over time that will lead to a world in which things like economic inequality decrease, because so many people care about wanting to eliminate poverty.
B
The heart, mind, and will you categorize together as intrinsic growth. And I want to read a particular quote which I found really compelling. Those of us who care about social change have a choice to make. The long, hard road focuses on mentorship, aspirations, and intrinsic growth, which are difficult to support in a technocratic world. They are not easy to measure. They resist quick scale-up. They're fraught with questions of values. They don't glisten with innovation. They violate all of the tech commandments. What is the role of mentorship?
A
What that quote is getting at is, first of all, the question: are you somebody who really cares about making the world a better place, or are you somebody who just wants to do whatever you want to do? And I would say I'm, you know, generally a live-and-let-live kind of person. I don't have any problem with people who want to develop technology developing technology. Although arguably with AI there might be; I have some commentary about regulating it better. But putting that aside, I think it's perfectly okay that there are people in the world who want to push the boundaries of a certain technology, and they just want to do that out of curiosity and see how far we can go. I think that's okay. The problem is if you're that kind of person and then you think you want to change the world in a better way, and it's going to happen through the thing that you happen to be working on. Then I think a different set of requirements applies to you, because now you're talking about trying to change the world explicitly. And guess what? The world doesn't work the way it does in code. And so you have to think about the impact of everything that you're going to do on individual human beings and on society as a whole. And once you start getting there, then the question becomes, well, what are you really trying to change? And one aspect of all of the tech commandments is that they're all about changing the superficial external circumstances of people, not about changing actual human beings. And I think in the end, if you really want to create positive social change, it requires changes in real human beings. It requires rich people to think that their goal in life isn't just to accumulate more wealth, that they should be working also to improve the conditions of the world. It requires people who are less educated to gain a better education.
It requires those of us who are in various kinds of positions where we have some amount of power, but maybe not a lot, to use whatever power we have to try to nudge things in a positive direction, et cetera, et cetera. And all of those are human changes. You were asking how the law of amplification applies to situations of work. It's the same in work. In the end, whether we have a good work life or not has nothing to do with the technology we use. It has to do with things like: is my manager somebody who actually cares about me as a person, in addition to wanting to accomplish the goals of the company or the organization? Do I have the kind of relationship with my peers where it's actually fun to collaborate with them on something, as opposed to a real drain on my energy? And none of those kinds of things actually depend on a technology. They depend on human beings being better to each other. And so, again, the change that we're after is that kind of internal human change, not what technologies will help us become 5% more efficient. And that's what ultimately that whole statement is about. And mentorship: the point of mentorship is that if there's something that I, as a person with a certain amount of experience in a certain field, can do to help bring about better change in the world, it's to help cause those positive changes in people who might find my advice to be of value. And that's why I emphasize mentorship.
B
And do you implement this with your PhD students?
A
Yeah, to the extent that I can. We haven't talked about aspirations too much, but my goal for every one of my students is to help them define and meet their aspirations in a way that would be satisfying to them. I think, even less than most of my peers, I don't push my own research agenda onto my students, although it's often the case that students who find my research interesting are the ones who apply to work with me. So there's some natural fit, but I don't push them in a particular direction. I usually try to figure out what their real aspiration is and then support that as much as possible.
B
Has anybody tried to excommunicate you? I mean, as a heretic? I mean, this is the path of heretics. You spoke eloquently about how philosophers, sociologists, and historians fight fiercely over this topic, and you waded into it and threw a gauntlet.
A
Yeah, one thing that was interesting was, so, the book came out in 2015, and I did a round of talks and meetings with various groups, and there were definitely people who were just outright hostile to the message. They would fight me tooth and nail on this idea that technology doesn't fundamentally improve society. And that wasn't even the point that I was making. It was just that these technologies are not going to help with persistent social problems, at least not in a primary way. But nevertheless, there was some amount of hostile reception. It wasn't significant. I was actually surprised by how little of it I got. It was often limited to, like, one or two vocal people in any gathering, you know, any large gathering. And then what was interesting was, I mentioned the book came out in 2015. I will say I do think it was one of the early skeptical voices about technology. It's hard to believe now, but until about 2015, people were not very skeptical of digital technology. And even the media was generally always praising new developments, new innovations, kind of in this awe mode, like, oh, my gosh, we can now do this, rather than, okay, let's think seriously about what impact this is going to have on society. And then the 2016 presidential election happened in the United States, and, you know, things like Cambridge Analytica and so on started to come out. And since then, there's been a dramatic shift in tone in the way the public thinks about technology. Like, you know, these days, it's not at all unusual for people to question social media and its value to society. I think those changes are good. I think we've become more critical of what digital technology is doing to us. You know, that critical look, I would say, is not too early. I mean, we're now about to enter a world with AI where the more critical we are, I think, the better it is for society. So I'm happy to have been able to provide some fodder for the critics of technology.
I do still think that the law of amplification is just as applicable now, and with AI in the future, as it has been. But I'm also very, very glad that there's been a societal shift that tends to be a little bit more skeptical of what technology can do for us.
B
So I have a few closing questions I ask everybody. They're a little weird. The first one is a marketing question. So I ask a marketing question about work, which is, what job do you hire your job to do for you?
A
You know, in my case, I'm a university professor. My primary job is to do scholarly research. And then, of course, I also have a teaching mission. And then to some degree, I'm involved in work that supports academia overall. I'm involved in committee work at the university, but also in my research community, which is international. I play a role in supporting the whole community of scholars that works in the areas that I work in. And what I often tell people is I'm extremely fortunate in that all three of those things are things that I really enjoy doing. And so, on the one hand, my job hires me to do it, but this is work that I would have done anyway. And so sometimes I feel like it's almost a corrupted situation, where exactly what I want to do is what my job wants me to do.
B
And you may have answered this question already, which is, what does it cost you?
A
Not too much. I think the worst thing about this job, which I sometimes think about, is that I spend a much greater portion of my life copyediting papers that my students have written than anything else. And if I could do less of that, it would afford me time to do other things. But all in all, it's not at all a bad trade.
B
Where can people learn more about you and your book?
A
They can just search for me online: Kentaro Toyama. And they'll probably get pointed to my homepage, which is, as with many people's, a little bit out of date, but still has lots of relevant pointers. Yeah, there's a few YouTube talks out there that I think you can find where I talk about many of the things that we talked about.
B
Fantastic. Thank you very much for coming on the show. You're in Japan at the moment. I don't know what time it is, but it's probably early. So thank you.
A
Absolutely. No, it was my pleasure. Thank you so much for having me.
B
Thanks for joining me for another episode of Work for Humans. If you enjoyed this episode, please give us a five-star rating wherever you listen to podcasts, and share the show with one person you think would get value from it. Believe it or not, this really helps us grow the show and reach more people who want to build the kind of work that people really want. As always, thank you to my producer, Jason Ames at 9th Path Audio, for his insights into content and his high standard for quality. Final note: the opinions shared here are my own and not the views of Google or Cisco Systems. Thanks again for listening. See you next time.
Work For Humans
Episode: Technology Alone Won’t Change the World | Kentaro Toyama, Revisited
Date: February 17, 2026
Host: Dart Lindsley
Guest: Kentaro Toyama
This episode revisits Kentaro Toyama’s influential critique of “technological solutionism.” Drawing on his experience founding Microsoft Research India and his book Geek Heresy, Toyama argues that technology cannot create social progress by itself; instead, it amplifies existing human capacity and intentions—whether good or bad. The conversation explores the limitations of technology-driven approaches to social change, the persistent fallacies driving “technocratic orthodoxy,” and the vital importance of investing in human intrinsic growth—heart, mind, and will—over seeking external, tech-based fixes.
Dart and Kentaro explore Toyama’s critique of “technocratic orthodoxy”—a set of persistent fallacies about technology held across industry and policy:
Measurement over Meaning [13:45]:
Quantity over Quality [16:16]:
Ultimate Goals over Root Causes; Destinationism over Path Dependency [20:01]:
External over Internal [25:39]:
Innovation over Tried and True, Intelligence over Wisdom [29:36], [30:34]:
Value Neutrality over Value Engagement [32:44]:
Individualism over Collectivism, Freedom over Responsibility [37:48], [43:09]:
Societal Implications [50:03]:
Application in Workplaces:
Policy & Society:
This episode delivers a thorough critique of “technology-first” thinking—whether in business, global development, or AI. Through vivid stories and careful reasoning, Toyama demonstrates that social progress depends on people, institutions, and values, not gadgets. Technology alone doesn’t change the world; it simply amplifies human nature for better or worse.
Final actionable insight:
For anyone seeking real change—at work, in organizations, or in society—invest first in developing people’s hearts, minds, and will. Use technology as a supportive tool, not a substitute for human capacity or purpose.
Learn More: