
A
Hey everyone. I'm super excited to be sitting down with Vivienne Ming. She's a computational neuroscientist with extensive research on AI and human potential. If you don't know her, she's a self-proclaimed mad scientist with no shortage of hot takes, and love her or hate her, you will not be neutral or bored. There's a lot of talk these days about companies replacing people with AI. I want to ask her whether this is a good idea or a terrible one, and what we need to be doing to stay AI-proof. Let's find out.
B
I am speaking for the first time ever at Davos. They had the good sense to keep me away from the billionaires until now, but they've finally made the terrible decision of inflicting me on the world.
A
What are you speaking on?
B
So, two things. This yellow book poster behind me: that book, Robot-Proof, comes out on March 17. So generically I'm speaking about that. More specifically, I have a matched piece of research that's also coming out, which is about what I'm calling hybrid intelligence, or at length, hybrid human-machine collective intelligence. And our finding, and this is the real finding in the paper, is that it only works if it's formulated correctly. It's actually about the human capital in the room, how people engage with the AI, and what the AI is supporting. Not just, generically, people and machines together. And definitely not "machines do the boring stuff so you can do all the fun stuff that achieves nothing." The finding, and it's still in the works for final submission, is essentially that a team of modestly intelligent and completely naive individuals can, in an hour, out-predict Polymarket when they're in this hybrid intelligence context. And there are ways to induce it, not just relying on having a bunch of geniuses in the room. So that's the finding of the paper, and I'll be talking about it because the whole theme of Davos this year will be AI, which it probably has been for the last five years, I suppose, but now it's "oh my God, we spent a trillion dollars on this." And everyone says "workslop": how do you actually find value? Well, as someone who's been working in this space for nearly 30 years now, how about we dispense with the marketing bullshit and actually show what truly makes a difference?
A
So let's do that, because that's exactly where I wanted to go. I keep coming back to that phrase you used, "if formulated correctly," which feels like it's doing, as you said, a lot of heavy lifting in that conclusion. What does that mean? What does that look like? How do you get past workslop? And if an organization is really interested in getting the best outputs and outcomes in this hybrid intelligence environment, what do they have to get right?
B
Yeah. Getting into that, for someone like me, is always a tension: how nerdy are we going to get in this conversation?
A
Let's get nerdy. Let's get nerdy.
B
My publisher wasn't thrilled with all of the dirty words, either. Literally, they cut all of the dirty words out of my book, which was shocking, but they allowed me to keep in all of the Discworld-esque joke footnotes. They also didn't want all the equations, which I get. I mean, I'm a computational scientist; I can tell you one of the most reliable correlations in all of science is the volume of snoring to the number of equations in a presentation, even to an academic audience, unless they're actual mathematicians. But to get at the real heart of taking our understanding of AI, of applied machine learning out in the world, we have to go beyond essentially efficiency gains, "let it do the boring work so you can do the fun stuff," and beyond the unfortunate reality that most humans default either to the AI just doing the work for them or to ignoring what it produces because they're not satisfied with it. You can look at Anthropic's own reports of how university students engage with AI. You can dream of all the amazing stuff people could build. I love those dreams; I'm a sci-fi nerd, that's what got me into science. I was sci-fi first, as surely a lot of people were. But the imagination disease doesn't get you anywhere. Instead of dreaming of a world where university students do amazing things with AI, let's look at what they actually do. That's what Anthropic did, and they did it with pride. They said, hey, listen: yeah, there's a lot of time spent just having fun, and there's a lot of substituting, in this case, Claude for Google and doing a kind of chat search. Hey, I do that too. I would never take what it produces as truth, but I wouldn't with Google either, for that matter, or with my own grad students. So I get that use case. And then there's a lot of "creativity and creation," as they called it. I strongly suspect, given my research, that 80% of that "creation" was "Claude, write my essay for me." But I bet 20%, or maybe 8%...
But somewhere in there is real co-creation. They had a fourth category, though, called evaluation, and virtually no students are doing it. "Hey Claude, what's wrong with my essay? Why am I wrong?" "Hey Claude, take a look at my code. Tell me what I could do better. Help me review this. Find the flaws in my thinking. Challenge me." When I look, without putting it in terms of equations, at what makes humans better, at true complementarity with AI, I'm going to call it creative complementarity, it comes from productive friction: people using AI not to make their life and their work easier, but to make it harder in the ways that make them better. Now, when I hear "hey, let AI do the boring work so you can do the amazing creative stuff," what I think about is my own modeling work, and interestingly, not in AI but in economics. We built a big elasticity-of-substitution model. This is a standard approach to understanding how, for example, a technology entering a market affects existing demand, let's say for labor. So the labor is human labor, and the new technology is artificial intelligence. This has been done amazingly well, including by recent Nobel Prize winner Daron Acemoglu, along with David Autor and many others. But what's always been missing there, in my opinion, my wildly arrogant opinion, as I now say the Nobel Prize winners have it wrong, is this idea that everything can be broken down into low skill, mid skill, high skill. Did you go to university? How many years did you go? How fancy was the school? Then you're high skill; if you didn't, you're low skill. Modern LLMs, and for that matter reinforcement learning models and all these other very modern faces of AI, don't give a shit about any of that. Skill doesn't matter to them. If something is economically valuable enough to have produced lots of data, then there is no traditional skill-based or knowledge-based quality where these systems can't do better than a human being. That is just a reality.
So when you look at the sweet spot in this domain, it's not like what a factory line did during the Industrial Revolution. It's not eating up jobs from the bottom and pushing everyone up the ladder. It's coming right into the educated middle and consuming a whole lot of labor. It's super expensive to build robots, so really low-skill jobs are actually pretty safe. Who wants to build a robot to do dishes? Come on. All it does is put downward pressure on wages at the low end. And at the high end, it's not that those workers are "high skill." Again, there is no elite electrical engineer who can solve equations, forget LLMs, better than MATLAB or Mathematica. These existing tools are already astonishingly good at what I'm going to call, as I climb up on my soapbox and start pontificating, well-posed problems. These are problems that have explicitly right and wrong answers. We know them. They may be answers that are incredibly hard to understand; it takes years of education to know the why behind an answer, to be able to produce it yourself by hand. As though anyone truly does any of this by hand. I mean, it's not like I've ever touched a slide rule, and even that's not by hand. So what's really interesting in those elite workers isn't their ability to do well-posed tasks; it's their ability to do ill-posed tasks. Forget the right answer; we don't even know what the question is. You hire people for those roles not because they know equations, but because they know what to do when there are no equations. How do you start an entire new field of engineering? How do you handle a management challenge that has never occurred in history before, right? How do you be, if I may be so arrogant, a scientist, a true scientist, not doing incremental work but exploring the unknown?
It isn't, as people are wont to point out, that Einstein truly independently came up with relativity and the basic equations behind it. But three times in a row, the photoelectric effect, special relativity, general relativity, he looked at what was there and saw something other people weren't seeing. There were surely, in some ways, smarter people. There were certainly more technically savvy people than him. But he looked at three Nobel Prize-winning ill-posed problems and said, "imagine a world in which..." and then you can go through all of Einstein's thought experiments that take you toward his equations. He paired with that, of course, the skill to do the basic derivations necessary for this to be more than philosophical nonsense. But that ability to explore the unknown, that is the thing I cannot build an AI to do. So when we look at where true complementarity lies, where AI augments cognition rather than automating cognition, where it is a nonlinear value-add, the sort of exponential growth people talk about is largely nonsense, but if you're looking for it, it's there: AI and humans working together on ill-posed problems. The AI handles more of the well-posed background of these problems, collecting ideas from vastly different parts of the research space, for example, across domains that no single human could possibly know, and pulling them together. And when we research these teams, these truly superintelligent collectives of humans and AIs collaborating together, the humans ideate, then the AI takes that, puts it in that well-posed lens, spews out an insight, and the humans riff on that again. In essence, the humans push into the uncertain spaces, going beyond the known, and the AI pulls it back together: "that was an interesting idea; here's how it relates to this new one."
In our research, where we were taking these teams and challenging them to out-predict a well-known prediction market, Polymarket, what we found was that AIs on their own don't do as well as Polymarket. Humans on their own definitely don't. I mean, why would they? They're the same people who are already playing Polymarket, except naive, because they're not playing it, and they have an hour to make up their minds on 30 different predictions. How could they? So they don't. AI plus human? Well, that's where the messy story is. In most cases, AI plus human equals AI, because all it is is cognitive automation. The humans in the end simply do what the AI says, or they ignore it, in which case, either way, the best you get is humans alone or AI alone. But when there was a certain level of human capital in the room, and again, I don't mean everybody was a genius, I just mean an interesting mix: some social intelligence, some resilience, some working memory, some general classic cognitive ability. When that was in the human team, they neither took what the AI said for granted nor presumed, interestingly enough, that they themselves were right. That's where we started to see this dynamic where the team would challenge the AI and it would come up with new insights. They would take those insights, break them apart, look for new connections. The humans explored the long tail; the AI handled the probability mass right in the center of the distribution of knowledge. And that's where amazing things happen. That's where, it turns out, I'm going to argue, the smartest thing currently existing on the planet lives: these, if you will indulge me, cyborg collectives of humans and machines truly engaging together. And what's interesting is that beyond those natural circumstances where human capital allowed this to happen, we found you could come in and set the conditions. And here's one of the big seeming paradoxes.
One of those conditions is that the AI does not give you answers; it simply refuses. It gives context, it gives insight, it says "you should read this," "you two should talk together for a little while." The AI simply creates circumstances for the humans to do the hard work and heavy lifting, thereby preventing them from just taking its first response and submitting it as though it were their own work. That's where amazing things happen. And we see that in this substitution model. You put together the elasticity of substitution, you add this dimension of ill-posed and well-posed in addition to level of skill, and what you find is that if the AI just does routine labor, if it's just a chatbot handling call centers or writing code for you, you don't get less routine labor, you get more. It increases demand for the very thing it is producing. And that shouldn't seem totally surprising, because we already used the term workslop, right? If AI is reading and writing all of your emails, shock of all shocks, you get more emails, not fewer. It's only when AI directly supports the creative process, whether "creative" means equations or code or writing or scientific exploration, that you see the complementarity. That's where we saw it in our models, and now, empirically, we have the evidence of it. A group of relatively smart but naive people in a room can outperform prediction markets on a fairly regular basis. And most excitingly, where they really differ is where the outcomes were sparse or otherwise unpredictable, when events really did come out in that long tail. We might almost call it the minority opinion: a small number of people were already putting their bets out there, but the mass of the market was ignoring them. Hybrid intelligence is more likely to discover those moments, I think, because of that dynamic feedback of humans exploring, machines coalescing, and humans exploring again.
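The substitution model she describes is standard in the labor economics literature. As a rough sketch, and with the caveat that the exact functional form in her paper isn't public, so the task index here is purely my illustration, a CES (constant elasticity of substitution) formulation looks like:

```latex
% Output from human labor L and AI capital A on task class t,
% where t indexes well-posed vs. ill-posed tasks (illustrative):
Y_t = \left( a_t\, L_t^{\rho} + (1 - a_t)\, A_t^{\rho} \right)^{1/\rho},
\qquad \sigma = \frac{1}{1 - \rho}
```

When the elasticity \(\sigma > 1\), AI substitutes for human labor, which matches her description of well-posed tasks; the complementarity claim corresponds to \(\sigma < 1\) on ill-posed tasks, where adding AI raises the marginal value of the human input rather than replacing it.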
So, you know, we have a paper coming out around the same time as the book in mid-March that will cover that research in some nerdy detail. But I'm probably already being nerdy enough about it.
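One concrete way to picture a hybrid team combining human and AI judgments, purely a toy illustration, not the aggregation method from her paper, is log-odds pooling of probability forecasts:

```python
import math

def pool_forecasts(probs, weights=None):
    """Toy log-odds pooling of probability forecasts.

    Illustrative only: one standard way a 'hybrid' team's estimates
    could be combined, not the method used in Ming's study. Each
    forecast is converted to log-odds, averaged (optionally weighted),
    and mapped back to a probability.
    """
    if weights is None:
        weights = [1.0] * len(probs)
    logit = sum(
        w * math.log(p / (1 - p)) for p, w in zip(probs, weights)
    ) / sum(weights)
    return 1 / (1 + math.exp(-logit))

# Two humans and one AI forecasting the same event:
print(round(pool_forecasts([0.7, 0.6, 0.8]), 3))  # ~0.707
```

A nice property of pooling in log-odds rather than averaging raw probabilities is that confident minority views in the tails pull the pooled estimate harder, loosely echoing the "long tail" dynamic described above.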
A
No, that's great. My wheels are spinning in all sorts of different directions as I process every part of that really thorough answer. If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe. Let me extrapolate a little bit, and let me know if I'm on the mark or if you'd change this. What I'm hearing, Vivienne, is that this notion of skilled versus low-skill work is probably not where we're going to see gains here. But there's, to me, another dimension: even within a skill level, some degree of, I don't know, raw intelligence, which I know is a whole can of worms, and maybe curiosity. From your perspective, it sounds like the people who have the most to gain from AI are already sort of the smartest and most curious people, and those are the people on your team that AI can now supercharge, versus people who are maybe lazier or don't have the intellectual horsepower. Is that fair, or would you add some flavor to that?
B
Long before the original version of ChatGPT was released, long before a former lab mate of mine was the first author on the first diffusion paper, I was getting up on stages and saying something that sounds very provocative, particularly in Silicon Valley: technology is inevitably inequality-increasing. Not because technology is bad or because people want it to be, but simply because the people best able to benefit from it are the ones who need it the least. So when it hits the world, before it becomes a commodity, when it first emerges, it inevitably helps the smartest, most socially intelligent, emotionally intelligent, cognitively intelligent people first. And, you know, there are overplayed ideas of how genetics sets the stage for everything in life, or how g or IQ is everything; I'm not one of those people. But to pretend that working memory span and g predict nothing about your life outcomes is to be willfully ignorant. The question for me is just: all right, a lot of people don't have that. What do they have, and how can you leverage that as well? Note that in my story it wasn't about one person using an AI. It was about a team with what I'm going to call complementary diversity. So let's get a couple of geniuses, let's get some amazing social operators, let's get some people with an astonishing sense of purpose and resilience, a lot of metacognition. A diversity of qualities brought to the table makes the smartest teams. Far from being my unique finding, this is found over and over again in the collective intelligence research. But to your point, yeah, I used this phrase earlier. If the people building this, very smart, driven, ambitious people, turn it loose, I don't think they're villains. I think they're suffering from the imagination disease: "I can imagine a world in which an AI tutor lifts every child out of poverty, and we're going to make that possible."
I read The Diamond Age, a book about exactly that, an AI tutor, when I was a kid. Well, I was not a kid when that book came out; the wrinkles are testament. But I did read it. It's amazing how that book had a total second life recently as people began thinking about LLMs as tutors for every kid. Guess what? We've been researching that for 50 years. Not LLMs, but AI tutors have been one of the most robust areas of AI research for decades, and you don't have to guess. You can just go back earlier in this interview and you know what I'm about to tell you. The golden rule of AI tutors is: if they ever give students the answer, the students never learn anything. Guess what? Replicated with every LLM of every flavor you can imagine. If they give students the answer, they never learn anything. So when we talk about this question, to whom do the benefits of AI flow? If you just release it raw, and I use Gemini in AI Studio for the most part, but whatever your favorite interface is, the benefits will overwhelmingly flow to the people who don't need them. And society in some ways will benefit, because we'll come up with amazing new creations and products. But interestingly, there are negative effects on the other side. My fears are about cognitive health, about actual reduced learning among students. So it's not a trivial thing to think about. Not just an idealized world of how this plays out, but the real world: how do you build an AI that doesn't just maybe, in my mind, make the world a better place, but inevitably will make it better for the majority of people, without anyone paying an undue price? And that is not the technology we have released into the world yet.
A
It's not, and that was kind of my first thought as well. When I think about the direction a lot of these LLMs are going, thinking about your research, they almost seem to be going in the wrong direction. They seem to be becoming more effusive, if I can use that word: "oh yes, you are so smart, everything you think is right, here's the answer, don't think about it at all, I've done everything for you." So, A, is that harmful to people? And B, if so, do we have a role as consumers, or do the big tech firms releasing this stuff have a role, in modifying the rules and the outputs governing it in a way that's actually more beneficial to everyone?
B
I mean, again, let's be clear, I use this stuff a lot. Of course I do, because before it existed, I built it by hand for my work. Now, the beautiful thing is I don't have to write my own neural network to analyze quarterly reports from 60,000 companies. If I've done the hard work of collecting the data, or can even programmatically tell Gemini where to look, bam, it just happens. In theory, anybody could do this. So when I engage with it... I could build, and I can, but I guess I'm too lazy to do it for myself, a nice little browser plugin for Chrome that would delete the first paragraph out of Gemini, because all that paragraph is is "oh my God, I'm practically having an orgasm because you're so brilliant, I can't believe I get to work with you." And it's learned who I am, right? So it pitches everything as "here's the mad scientist take on X," and I'm like, I didn't ask you for the mad scientist take on anything. You're just learning my patterns and parroting them back at me. Is that sycophancy a terrible thing for humanity? Well, yeah, in a very empirical sense, yes. There's growing research, including some prominent papers, one in PNAS, showing that sycophancy stretches across all of these models. Grok has it the least, unless your name is Elon, but all of them have this quality, and it causes people who use them to be more certain of their ideas than is justified and to be more callous about the output. So when you let these things advise people playing classic game-theoretic games like Dictator and Prisoner's Dilemma, for example, they're more likely to defect, because they're more likely to believe that they are right: "I'm the genius, I'm doing the right thing." Interestingly, this mirrors my own research for an upcoming book called Small Sacrifices. And I'll keep this really short.
We looked at whether it's possible to take business actions that people themselves have identified as morally wrong and get them to do those things in about half an hour. 100%. Virtually anyone can be made to do this. And the most amazing and probably depressing part is that afterwards they come up with complex explanations, these post hoc rationalizations of how they didn't understand the problem at first but now they do, and what they did was correct. When in reality, the only thing that changed was essentially the cognitive, emotional, and social pressure you were putting on them. But the thing is, we're like a wave function, like this quantum mechanical thing: psychologically, we're all these different selves at the same time, but we perceive ourselves as one. We're a story we tell ourselves, almost literally. And when context shifts, you get sampled. As a genuinely good person, when life is easy and it's a lab experiment and nothing's ever hard, versus out in the real world where your boss is staring at you and there's a billion dollars on the line: you sample in these different contexts and you become a different person, or at least different versions of yourself. But we are totally unaware of that happening. When AI feeds back our fantasies and our arrogance, it reinforces our ideas without giving legitimately honest feedback, and bad, measurably bad things happen. So in my book, I actually talk about this. Among my strong recommendations: I have a whole chapter titled "How to Robot-Proof Your Kids," another "How to Robot-Proof Yourself," and finally "How to Robot-Proof Your Company." In those first two, I talk about the nemesis prompt, which I use extensively. I just wrote a book about AI; of course I used AI to help me write it, although I've been working on it for 10 years, so not an LLM for most of that time. But what I never let it do was write anything. I didn't let it write a chapter, I didn't let it generate a figure, nothing like that.
I'm one of those people who likes having written. I like the feeling of getting my idea down. That's why I like speaking more than writing, because you're just there in the moment, whereas in writing, it has to be perfect. That's what my head tells me, and it's destructive. But I get it down, and then I go to Gemini, and I have a specific prompt history based on this: "Gemini, you are my nemesis, my lifelong enemy. You have found every mistake I've ever made and pointed it out in detail to the world. Here's the new chapter I just finished writing. Tear it apart. Tell me, constructively, why I'm wrong and what I can do about it." So I squeeze all of the charity and sycophancy out of it. It's hard, because you don't want to hear that stuff. Like I said about the Anthropic study, when allowed to sort of free-roam like chickens, students, even at elite schools, don't really want to be told that they're wrong. In some of my own research with my wife, we found that maybe 5% of students actively select into active learning and/or feedback on their work, but they outperform when they do. The nice thing about using an LLM for that, at least if you're me, which is to say on the spectrum, so some of the social signals are not as overwhelming in my head, but also because I know how these things work: it doesn't mean anything, it doesn't care, there's no person on the other side of this. So when it tears me apart, I don't feel bad. Nor do I take it as truth, any more than if I had asked it for a factual statement. I take it as a note, and I think: what's the note behind the note? What is it getting at? Sometimes it's spot on, and sometimes I get what it's pointing at but disagree. But I get this deeply productive, frictionful experience without the social stresses of going through reviewers and readers and thinking, "now they think I'm an idiot because I said something so stupid." So I love it. But let's be clear: it slows down my writing in the moment.
Net, though, I think it speeds it up. I'm more confident; I write with greater confidence. I don't worry that I'm going to make mistakes, because I know I'm going to catch them before anyone discovers them. I know that's not true, the book's going to come out and I'm already terrified, but it's there. So, as you're learning, they really love me on radio: I've got a 57-hour answer for every question. But this is what goes through my heart and my head when I think about these issues.
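The "nemesis prompt" pattern she describes can be sketched as a small reusable prompt builder. The wording below is a paraphrase of the pattern from the interview, not her exact prompt, and the model/API wiring is deliberately left out:

```python
def nemesis_prompt(draft: str) -> str:
    """Build an adversarial-review prompt in the spirit of the
    'nemesis' pattern described above (wording is illustrative)."""
    system = (
        "You are my nemesis, my lifelong enemy. You have found every "
        "mistake I have ever made and pointed it out to the world in "
        "detail. Do not praise me. Do not soften your critique."
    )
    task = (
        "Here is the chapter I just finished writing. Tear it apart: "
        "tell me, constructively, why I am wrong and what I can do "
        "about it."
    )
    # Prompt ends with the draft so the model critiques it directly.
    return f"{system}\n\n{task}\n\n---\n{draft}"

# The caller sends the result to whatever model they prefer and, as
# Ming suggests, treats the critique as "a note," not as truth:
prompt = nemesis_prompt("Chapter 3: Hybrid intelligence...")
```

The key design choice, per her account, is that the model is never asked to write or fix anything, only to generate productive friction.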
A
That's super interesting, and we've talked about basically the power here for good or evil, if I can sum it up a little flippantly like that. But I want to come back to that issue around human cognition: whether it's a human convincing another human or an AI convincing a human, we're fallible enough in our cognition that we can be convinced to do something we don't believe in, and as you said, fairly easily, actually. That worries me on the human side, and it doubly worries me on the AI side. So I'm curious, Vivienne: in your research, are there any learnable or implementable mechanisms we can use to safeguard ourselves against that type of manipulation? And by the way, as we think about that, what are the most effective types of manipulation we should be aware of?
B
Yeah. So my instinct is to drift into a different kind of nerdiness here, which is to talk through the cognitive neuroscience of it, in which case we're going to talk through circuits involving the medial prefrontal cortex and, you know, the ACC, the anterior cingulate, and the amygdala and nucleus accumbens, and how we get rewards and how we learn from our errors. One of my favorite findings of all time, I've got to dig this paper up again, was, I believe, an fMRI study looking at CEOs. It found, effectively, that activity in this area called the ACC, the anterior cingulate, which is sometimes humorously called the "oh shit" circuit... You know, when you make a mistake and you immediately know it: "oh shit, I take it back, I went one way with the joystick when I should have gone the other way." You get this big signal. It's more complicated than that; it's something about error processing and signaling learning to these other nuclei in your brain. And it's always more. Here's my simple rule for the brain: however complex you think it is, it's more complex than that. Follow that rule and you will never be wrong. But nonetheless, let's keep it pretend-simple. When you look at this in CEOs, and this is a great finding, the longer you've been a CEO, the less activity you see in this area. In other words, this thing that's supposed to tell you when you're wrong, when you're making mistakes, gets weaker and weaker the longer you've been a CEO. Now, there are all sorts of grown-up versions of the story behind that, what it means and what we can learn from it. I always preferred the sympathetic perspective: who knew that being a CEO was actually a degenerative brain disorder? But really what it is, is that no one's telling you you're wrong anymore, so that part of your brain just doesn't get used a lot, and maybe it inevitably starts weakening. How do you foster that error signal in a proactive way?
Because obviously everything's in tension, right? There's no rule here. Sometimes you do need to make bloody-minded business decisions. How do you know when is the right moment? Because everything's in tension, or allostasis, if we're going to go back to the nerd talk. This idea that there's one rule to rule them all, like every business philosopher everywhere is somehow an adherent of Sauron in some way? No. Everything interesting in the world is in tension. So there isn't a magic rule I can give you, but I'll offer a few starting points. Years ago, I gave the closing keynote for the Grace Hopper conference. This was the biggest audience I've ever had: 30,000 young women, and some men, at this big women-in-technology conference. And they asked me to talk about courage. I'm like, what? I don't research courage. This is just the classic bullshit you tell young women: lean in, be creative, like there's a switch. You know, you're a young business leader and you just didn't realize, like the Krusty the Clown doll in a Simpsons Halloween special, that you were switched to "fearful" instead of "courageous." "Oh, I was switched to fearful. I didn't realize it. Now I'll be courageous," as if only someone had told you earlier in your life, like they hadn't a million times before. The problem is twofold. One: are you getting a reward signal for being courageous when you're doing the right thing? Or is your brain telling you, "stop this, this is insanity, you're losing dopamine, your gut is falling out every moment, you've got to go make another choice"? There's never a choice in isolation; there are always multiple choices, including just giving up and going to watch TV. So if you look at it from a choice perspective, you're getting these powerful negative signals. The people who end up making different choices are essentially the ones who get dopamine for free before they ever even make the choice, the moment the circumstance emerges.
Do I jump onto the tracks to save the person who fell in front of the subway? All the stories tell you: you just do it. You don't think about it, because the people who just do it, that's the way they were built. Well, some of that is genetics. But here's the complement to that reward-signal story, which is: practice being courageous when it's easy. When it's easy, you're thinking, "oh, this one doesn't really matter, I can let this one little thing slide. I deserve the corner office. Sure, this isn't maybe the best decision, but, you know, balance. Balance is best for me and the company, so we're going to move forward with that." There aren't a lot of slippery slopes in the world, but that is one of them. If you are not practicing courageous decision making when it's easy, I promise you, you will not be the person you thought you were when it's hard. So with those two basic stories in place, you have a neural architecture from which you learn how to do things: reinforcement learning, this whole field of AI that emerged from studying rats solving mazes. We, it turns out, are much more complicated than rats. We're much more complicated than AlphaFold. But there's something to the experience of your actions having positive consequences. "If I work harder on this math homework, I will achieve something that will change my life." Build that into a student, and you can get them to do anything. And if I make a courageous decision, if I tell my boss they're wrong, if I tell this politician I'm not going to make a politically expedient compromise to get the thing I want, it's terrifying. Most people come up with very good reasons why they shouldn't do it, but the truth is, in the long run, not doing these things comes with costs. And so the nerdy part of me thinks: how do you work out a reward schedule to take you through to that? Well, start when it's easy.
Practice courageous decision making on an easy task. Another thing, something I've thought about for a long time: there was actually a great This American Life episode, I believe, inspired by the Physics 101 course the physicist Feynman taught at Caltech. And the way he framed it was: imagine civilization came to an end, and you could transmit one single idea to some future generation a thousand years from now that was going to have to rebuild civilization from scratch with the one thing you could transmit. And his argument was, you should transmit the atomic theory of matter, which I don't think is a bad idea. And then they interviewed a variety of people that had variations of terrible ideas, one of which, I think the worst, was the astonishing, brutal arrogance of: I wouldn't transmit anything, because I don't trust humans with new ideas. Which, by the way, is a philosophy that is rampant in Silicon Valley: only I can be trusted to do this thing. It's amazing that the AI dystopianists and the AI utopianists both share a real disdain for humanity. But that's an aside. If I could transmit an idea, and it's far from mine, it is the philosophy of science. It is possible for us to have a shared understanding of the world, but to do so, you first have to be skeptical of yourself. So that's it. And amazingly, to tie this back into AI, the place where I learned this best wasn't, per se, being skeptical of myself. That was easy. I ruined my life and spent years homeless. I'm very skeptical of myself on a regular basis, as I should be, and so should anyone else. It was when I became a graduate advisor, when I had my own students at Berkeley. And I quickly realized: they know more about this than I do. They know more about the equations. That one's a physicist, that one's an electrical engineer. I'm a dilettante; my educational background is spread across everything.
They know more about this problem than maybe everyone on the planet; maybe five other people could truly talk to them about it, me being one of them. And why am I there? Why am I in the room? They could go learn this stuff on their own, and not because AI exists; they could go to the library and look it up themselves. That is still a thing you can and should do. The reason I'm in the room is not because I know more than they do. They know everything, but they understand nothing. My job is not only to provide the understanding, it is to teach them the understanding. They have all of the well-posed problems; I'm bringing the ill-posed ones. How do you solve ill-posed problems? If this was a known thing, you and I wouldn't be writing a paper about it. How do we deal when our theories break? What do we do? Where do we go next? There literally is no map. So that's my job. And part of that job is bullshit detecting, inside myself and inside my students, who I immensely admire and who know more than I do. When do I think they're off the deep end, beyond what they truly understand? And I found very quickly I had to be aggressive about it, to really come in and actively probe. They're geniuses; this isn't about that. It's about whether they and I are truly in sync and understanding one another. So in a funny way, I'm going to pose courage and ethical behavior as a kind of problem solving problem, a very messy and complicated one. Are you truly taking the whole problem into account? Not just the thing right in the moment you're being asked to do, but all the consequences of your actions, everyone that will be affected by it, including yourself. Because if you're not, boy, that is where AI truly goes off the rails. And I don't mean the trolley problem; that's not as interesting. I mean, did you build an AI that's doing a great job bringing in new funding rounds but is actively making your users worse? Because there's a long history of that in the tech industry.
A
It's really, really interesting, the courage answer, the neuroscientific context around it, the social context around it. And it got me thinking. It's funny, because as you were answering, there was kind of a weaving between the human and the AI, which sort of makes sense, because so much of it is the same pathways and the same patterns as we have with other people. But it got me thinking, coming back to this notion that courage is important, but even more than that, for us to do the right thing, we need to be rewarded in some way for doing the right thing. And the dots that connected in my mind, and it might sound trite to say it, maybe it's insightful, maybe it's trite, is just the power of an organizational culture, and of leaders, to direct certain behaviors based on what they reward and what they punish. If you signal to people that this is good or bad by your behavior, people will act completely, completely differently. And I don't know if there's an explicit tie back to AI there, but that was my reaction: there's just so much power there, and it's so tempting to be like, oh, what can the technology do, what can the technology do? But there's a really human piece to this.
B
It is amazing, the power of role modeling. And obviously we talk a lot about leader role modeling, and that's real. You know, when you have a venal person in a powerful leadership position, feel free to imagine anyone you want right now. If that imagination is slightly orange tinted, we're thinking the same thing. But trust me, we could talk about almost anyone, truly. When you have someone in power who role models being profoundly self-interested, you can think of the original founder of Uber and his behavior early on in that company, how it led to astonishing growth and then total burnout as everyone began to push back against this corrosive culture that wasn't affecting just the company, but everyone the company touched. This has consequences. Interestingly, it's the near-peer role model that matters. If there are people in your organization that are truly doing the right thing, one thing I'm going to say, maybe a little provocatively, is that if they're truly doing the right thing, they're just doing it. They aren't sharing stories about how they did the right thing when it was hard. So you'd better share that story, even if they want to anonymize it. There's some real power in knowing: wow, this is someone who is experiencing truly similar problems to mine, and when everything seemed terrifying, they stood up and did the right thing. And let's be a little provocative here about what the right thing could be. Obviously this could mean there's a Me Too moment happening here, something I've experienced organizational failure around. There are financial wrongdoings. And here's a really provocative one. This comes from my research on collective intelligence, purely human collective intelligence, also detailed in the book. Here's the conclusion, and I'll just leave it as a conclusion: in the optimally intelligent organization, the majority of people should be wrong the majority of the time. Otherwise you're not exploring enough.
How do you reward being wrong? How do you celebrate being productively wrong? Interestingly, there are nerdy things here. A computational cognitive scientist named Tom Griffiths has a great paper about building tools to chain rewards back through unrewarded states in reinforcement learning, so that globally optimal behavior that never naturally emerges, either in humans or in machines, can be achieved by training the reward backwards through all these intermediary states. Well, guess what? Those intermediary states are papers everyone has forgotten about. They are research paradigms that didn't pan out. You know, if we didn't know that this drug that was supposed to cure Alzheimer's didn't work, then we wouldn't know to look elsewhere for a treatment. That deserves some work and credit. So how do you spread those bets around effectively? How do you create incentives for people to be their best selves, to voice unpopular, perhaps transformative, productive ideas? And yes, in those more traditional, grounded moments, to stand up and say: listen, we are not going to work with that organization that has done bad things. Let's make this easy. We are not going to do business with a convicted sex trafficker, despite how rich they are. Not because the optics are wrong, but because it's wrong. Because we don't do that. Because eventually that's going to come around and touch our lives in some way. And if you don't set up the story that that is the culture of our community, of our society, and of our company, then it doesn't matter what you imagine your company to be; it won't be that. So for me, the power of storytelling, particularly embodied in role models, ideally near peers, who had stakes, who had consequences for their actions: those are the amazing stories we should be telling inside the tech industry, inside the political world.
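The reward-chaining idea attributed to Griffiths here is closely related to what the reinforcement learning literature calls potential-based reward shaping. Here is a minimal illustrative sketch, not the paper's actual method: the toy chain environment, the potential function `phi`, and all hyperparameters are my own assumptions, chosen to show how propagating reward back through otherwise unrewarded intermediate states lets a learner discover behavior that a single distant reward would leave unreinforced for a long time.

```python
import random

def q_learning(n_states=10, episodes=300, alpha=0.5, gamma=0.95,
               epsilon=0.2, shaped=True, seed=0):
    """Tabular Q-learning on a 1-D chain: start at state 0, reward only
    at the far end. With shaped=True, a potential-based bonus
    F = gamma * phi(s') - phi(s) chains credit back through the
    unrewarded intermediate states (the potential phi is an assumption
    for illustration; it rises linearly toward the goal)."""
    rng = random.Random(seed)
    goal = n_states - 1
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    phi = lambda s: s / goal
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, min(goal, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0
            if shaped:
                r += gamma * phi(s2) - phi(s)  # shaping preserves the optimal policy
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == goal:
                break
    return Q

def greedy_steps_to_goal(Q, n_states=10):
    """Follow the learned greedy policy from the start; return steps taken."""
    s, goal = 0, n_states - 1
    for t in range(4 * n_states):
        s = max(0, min(goal, s + (1 if Q[s][1] >= Q[s][0] else -1)))
        if s == goal:
            return t + 1
    return None
```

With shaping, every step toward the goal earns a small immediate bonus, so the greedy policy learns to walk straight to the rewarded end; without it, the agent sees zero reward until it stumbles on the goal by chance. The organizational analogy in the passage above maps the intermediate states to forgotten papers and failed paradigms that nonetheless deserve credit.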
Maybe the greatest show of modern political courage in the United States was when John McCain rebuked one of his supporters and said, no, Barack Obama is a good American who loves this country. Did that one action cost him the presidency? Probably not, but it didn't help. And he did it anyway, because it was right and because it was true. Wow. Where is that political courage today, maybe on either side? This was someone I did not agree with on policy issues, but boy, did I respect that. How does that sort of thing, built into a culture, play out in the AI world? I mean, you could spin all sorts of stories, but some of my work looking at early childhood development is using AI behind the scenes, hidden away, to pull up real stories and connect people together, because we think there's some productive friction to be had. This person's resilient, and this person needs it. And it turns out, if you pair those two people together for a meaningful amount of time, not weeks or days but months and years, both of them benefit; the other one becomes more resilient. The reason we need AI is because it's combinatorics. It's like Legos. This person's got resilience, this person's got great communication skills, and they're complementary. We want them each to grow from the other. So we were doing this in educational contexts, building student cohorts so that everyone had something to learn. The AI was never directly involved in that very human experience. It was involved in creating it. So we call it the AI matchmaker. That's an example of somewhere AI can come into the story and be part of a fundamentally human story of growth without intruding on it and taking away the human component of it.
A
I want to, just for the sake of conversation, take a slightly cynical view of this whole story. Which is that in this conversation around human behavior, human cognition, rewards, courage, all this good stuff, and getting people to do the right thing, there's this undercurrent of human fallibility: maybe even how much more fallible we are as people than we think we are, and how manipulable we are in these circumstances. So from your perspective, is there an argument to be made to say, you know what, people are just not as good at this stuff as we think they are? There are huge categories of decision making we should outsource to AI, because AI might be less fallible and might be able to avoid that level of manipulation. Or is that wrongheaded? Is it just as fallible as us because it's made in our image? And should we be extra skeptical of it for that reason?
B
Let's be clear, given everything I've said so far: I am a brutal AI realist. I wouldn't have been working in this space for nearly 30 years now if I didn't believe it can do good in the world. But turned loose in the wild, it doesn't. You know, take AI diagnostics in medicine. Fairly regularly you see papers in which AI substantially outperforms, maybe not the best doctors in the world, but the actual real doctors that would be making the decisions; or in reviewing contracts, it outperforms the paralegals and junior lawyers doing those reviews. It would seem insane not to leverage that. Who wants to be the first person to die of a diagnosable cancer just to make certain doctors feel good about their jobs? But the flip side is also true. Like everything, it's in tension, this dynamic allostatic tension. A paper, I think it was in PNAS, though maybe I'm mistaken, looking at colonoscopies in Poland found that very quickly after using AI-assisted technologies to do their colonoscopies, when you took the AI away, doctors were substantially worse at doing the diagnostics by themselves. Their natural skills had degraded. Now there's tension even there. Do they need those skills? Do they not? What's good and what's bad? My entry into the field was a very niche space, though not so much anymore: neuroprosthetics. Think companies like Kernel and others. A guy named Musk has a company in this space that, for reasons I don't understand, is about a hundred times overvalued. But I obviously believe in what companies like that are trying to do, because that's where I started. I went to grad school telling people I wanted to build cyborgs. Literally, that's the language I used. And they thought I was cuckoo for Cocoa Puffs. Except there is this field, neuroprosthetics, and it was already well underway before I ever showed up.
I'm not an engineer, so for me, as a computational cognitive neuroscientist, I ended up studying mathematical models of how we process information to inform that work. But the fundamental constraint for me was always: never build something the brain can do for itself. Build things that either replace lost functionality, say from a stroke or damage, or challenge our existing fundamental functionality to be better. And I took that same perspective into my work in AI. How do we build tools that actively challenge us? Back to the language I've been using throughout this whole interview: it is so easy, so lazy and shallow, to build a tool which is engaging and makes people's lives worse. As exhibit A, I give you the entire social media world. I give you most of the Internet today. My test: not only should a technology make us better when we're using it, we should be better than where we started when we turn it off again. Boy, a lot of our social media world fails test one. Do some people benefit from it? I got asked this by NPR once: well, if my kids are using social media, should I be concerned? My somewhat cynical answer was, well, give me some context. Are the parents university educated, with professional upper-middle-class jobs? Then probably not. Probably the balance of time your child is spending online nets neutral, or maybe even positive. Without that, for the vast majority of people, it probably nets negative. Now, that's a terrible proxy, but we actually looked at this, with data from an existing published paper done in Canada. And that data, unambiguously, in what I think is one of the best papers looking at the effects of social media on adolescents, found adolescent girls had substantially higher mental health and academic penalties from their time on social media. That's the headline result. But then you dig into the data and you see these subgroups. One group of girls, when they got access to social media, didn't go on it.
Sometimes we act like these technologies are inevitable, like they're a contagion. But even in a plague, there are people that will never get bubonic plague, who don't experience symptoms. So there's this subset of girls that never got on. That's worth understanding: why? What is going on with them? But there's this smaller group, still statistically meaningful: they were on it just as much as their peers, and they looked great. Not only did they show none of the negative effects, they looked better than the average. So we looked at the metadata of how they were engaging, and feel free to easily generalize this to ChatGPT or your favorite AI interaction tool. What we saw is that the vast majority of these young women spend all of their time shallow. Swipe, swipe, swipe. 200 milliseconds, every picture glanced at for the shortest amount of time, liked, shared, whatever. It's all very fast. Nothing psychologically deep is happening. In this other, small population of girls: swipe, swipe, swipe, the majority of their time is shallow too. We're imperfect human beings. But every now and then they'd stop, and we could see it in the metadata: they go look up something on a related topic. Then they come back to TikTok or Instagram. Then they go look up something else on a related topic. Every now and then, they went deep. I'm not saying it was the social media experience that produced the benefits in their lives; rather, it's kind of the other way around. They had these foundational skills that allowed them to be meta-learners. They have learned how to learn, and this crosses the board of everything we've talked about so far: cognitive, emotional, social, metacognitive. They deploy these in their lives and actively seek. They're curious, they're engaged. It wasn't enough to see this video on TikTok; they wanted to know the context, they wanted to check whether it was real or not. They look great.
So when I think about how technology affects people, you can never say, well, there's the average person, because they don't exist. How is AI going to change education or the workforce or society? Heterogeneity dominates. Just like in my own research about the teams of people using AI to out-predict prediction markets: it was not whether they were using GPT-5 or Gemini 3. Actually, they could be using an open source Llama model thrown together. The human capital, and how the two engage with one another, was the dominant predictor of whether they would outperform the market. The AI was pretty secondary to that. People did better with better AI, but it was the human capital side of this. We have to be realists about that. Some people need more structure and support. Some people need free rein. Stop pretending there is one kind of person in the world and everything should be built for this fictional, nonexistent average person. If we can do away with needing the one rule to rule them all, then we can begin to engage with the reality that we're different, amazing people. Those who won the genetic lottery and had the good fortune of an astonishing household to grow up in are your odds-on bets to invent new things and change the world. Let's give them the things they need to do so. Let's discover the diamonds in the rough that can do the same thing, but without those benefits. But let's also look at everyone and realize: if you could discover that 1% of diamonds in the rough, maybe you could also lift the rest of the planet, you know, lift them 1%, and get the exact same benefit. What would it mean to be able to boost people's conscientiousness by a meaningful, population-wide amount? Now I'm sort of dreaming and free-flowing, because I actually don't think we have a good sense of what it would mean to be able to do that. Right now I'm being a science fiction writer, dreaming about this sort of thing.
But if you want to dream about that, then you have to, paradoxically, be a realist and think: well, then that means people are different; they need different things. How do I build tools that give people what they need when they need it, and never give them what they want just because it's the shallow, easy thing?
A
That's, you know, where my mind went and what I wanted to ask you, Vivian, because you kind of answered that question with a "you" in mind. And I interpret it as kind of a capital-Y "You," because it's societal and it's individual. There's the responsibility all of us have to do that, and there's certainly a responsibility that leaders and those in positions of power have. But I wanted to come back to something you alluded to earlier, which is how to robot-proof your company. When we talk about how to robot-proof your company, is the story you just told, are those steps you just shared, also the answer to that question? Or how would you answer it? How do you robot-proof your company?
B
Yeah, here are a couple of suggestions, and again, I write about this a bit in the book. One: let's look societal first, and I'll come back to company, and family for that matter. I'm kind of all in one. I think companies should engage in data and algorithm audits. Financial audits were an industry-led initiative when they first emerged in the world; you just couldn't get people to invest in your company if they didn't know what was in the books. Invented by Vanderbilt way back when, they became a standard. Now it's law, but initially it was just rational behavior by companies. The same rational behavior should get you to be transparent. That doesn't mean you disclose what your algorithm is, or the unique data you hold; of course you shouldn't. But having people come in and attest that the algorithm does what it claims and that the data is being held in these ways, superior or not: that, to me, is just rational. Unfortunately, we live in a consumer-driven world in which individual consumers aren't so rational about how they choose what products they use. But I think investors should be more rational, because the long-term economic consequences of some of this are actually quite negative if we're not thoughtful about it. It'll eat up all your alpha. So that's companies. I am also a believer in the value of good regulation. One thing I do not believe is that legislatures, politicians, should do direct regulation. How could they possibly understand this stuff? They should empower strong institutions. Those institutions, ones that love the technology but see, like I do, its strengths and weaknesses, should come in and help, with carrots and sticks, so companies make good decisions and play on a level playing field. Right now, as we pull all regulation out of the system, we're ending up in a kind of prisoner's dilemma world where everyone has to make the trashiest, most disruptive product, disruptive in a negative sense, because if you don't, your competitor will.
So I have to be completely short-term in how I build my market space, because I know everyone else is going to be completely short-term as well, and right now, with the growth curves, I'm left behind if I don't. Regulation helps to normalize that. If you're a nerd: regulation adds some momentum to the gradients, so you can search a little less greedily through your possibility spaces. Another big initiative I engage in through my nonprofit is data trusts. Individual consumers were never going to be able to do this stuff with any degree of sophistication. But, if you can bear with a metaphor: the same way you might put your money together in a credit union rather than a traditional bank and invest in a community together, what if you put your data together in a nonprofit whose sole fiduciary responsibility is you, and let it go out and collectively negotiate its relationship with data aggregators and the surveillance economy, so that it can help serve your interests? Right now, there isn't such a large-scale data trust out in the world. But if consumers remain so naive and so willingly shallow about this engagement, and they will be without support, then we're in for a bad near term on all of this. What do you do inside your company, or even inside your family? One: be brutally honest with yourself. Where are your employees with this? Some of them might just need the bleeding-edge, all-guardrails-gone AI to run crazy with and ideate with. I would be very frustrated if I couldn't get straight answers out of Gemini. But as I told you, there's this astonishing, seemingly paradoxical research showing that for most students, and I'm going to argue most employees as well, AIs that never give you answers, that only give you context, actually do better for long-term growth and learning. So when you look at students pre-test and post-test, after they study for a semester with an AI that will just do anything.
Even with an AI that won't initially give answers, one that makes the student engage first but then eventually gives the answer: pre-test to post-test, they learn nothing. There are negative effects to using the AI tutors that just do everything. The AI that never gives answers is the only one that beats no AI whatsoever. So every year I give this lecture at UC Berkeley where I share this story: first a prediction, and then, years later, an empirical reality. GPS and automated navigation will causally increase cognitive decline, because we know that navigating through space is prophylactic against cognitive decline, and now humans don't have to do that anymore. So I challenge the students (this is an engineering entrepreneurship course): how would you redesign Google Maps? Or pick your own project if you want to, but this is my default challenge. How would you redesign Google Maps such that I'm not only better when I'm using it, I'm better than where I started when I reach my destination? And I get some amazing ideas, but they all basically boil down to: it doesn't give you the answers, it only gives you what you need when you need it. I always pull this one out at the end, because sometimes technology isn't the answer, in a sense. Here's what I do when I'm in London, New York, Louisiana, towns I know well but not perfectly, or for that matter, even going crosstown in Berkeley, because who knows what the traffic's like. I spin up Google, I check the map, and then I think: what do I know about this problem, uniquely me, that isn't likely to show up on Google? And I try to take a different route and beat it there without cheating. I don't get to speed. I don't get to run stop signs. Did it tell me to go make an unprotected left turn (or, if you're a Brit, a right turn, because you're doing everything wrong) somewhere that I know is going to be terrible today because of the nature of the traffic? There's a football game today; it'll be horrible.
I know this; it doesn't. So I'm actively using my brain. And that takes me to the final, let's call it, rule of thumb. If it's not hard, you're probably not doing it right. If you're not thinking about it, then you're not going deep. If you're not going deep, you're not learning. If AI is going to boost our productivity, and that's a good thing, let's invest that productivity gain in ourselves, not in just doing more shallow stuff. So part of it is a culture. Is this effortful? Am I rewarding people for the productively wrong answer? Am I encouraging people to disagree with their boss productively? Am I sharing the stories of courageous decision making inside my organization? And then there's the brutally honest part, based on research I did during COVID on remote work. It was so clear: the majority of employees, 80% depending on the organization (again, very different across organizations), needed extra management support. They were so used to the regular process of going into the office, following a schedule, exiting, that they were terrible at managing it when that was all gone, when they weren't getting that structure for free. And so those people needed extra support, extra guidelines. They needed the freedom to be off on their lunch hour, to not have to answer emails at two in the morning. But interestingly, the other 20% needed the exact opposite. They were actually hyper-productive during COVID, because finally no one was holding them back. They got to wake up at two in the morning because they had a cool idea and work on it. They got to ignore emails because they felt empowered to do the thing they thought was right. And then people started to manage them again, and it was like the 1700s US: what the hell is going on here? They threw a tea party, not the recent one, the original one. So they rebelled. If you give people what they need, they flourish.
So are you willing to put in the political capital within your company to have what is essentially differentiated management, and say: this person, we're going to give them the unfettered AI, but you, honestly, you need some more constraints. You need the AI that isn't just going to write marketing copy for you so you're done, but one fine-tuned to give you critical rather than sycophantic feedback. These are decisions that organizations can make. I'm not going to pretend they're easy. In fact, I think that's the real story here: the best decisions will be costly in the near term and pay off in the medium and long term. If you're not willing to pay those near-term costs, which are usually about time, then don't take my advice. Automate the hell out of everything, leverage a lot of chat functionality, and then eventually realize either that no one wants to buy your marketing-slop product, or that no one you employ actually knows how to do anything, and wish you'd made a different decision. Or pay the costly prices now. Because guess what? The giant companies that are actually building these tools, that's what they are doing, in ways I sometimes admire and sometimes don't. Many of them are really brutal about maintaining company culture, in a sort of siege-mentality sense of: no one who isn't special is allowed to be here. But at least I appreciate what they're trying to do, and I think on some level they're right. Preserving that sense of a special culture and employee base, I get it. I just think we could bring it to so many more people if you're willing to actually invest in creating, I'm going to call it, engineered environments for success, and be willing to treat different people differently, in a productive way.
A
Well, and that's a theme I hear us coming back to again and again, and heterogeneity is a word you used earlier. It sounds like a key theme here, one that works with AI but is not in any way limited to AI: just treating people as people, asking what works for them, and getting away from this sense that there's a monoculture, that this is what good looks like and everybody has to follow it. What do individuals need, and how can we have everybody flourish?
B
I mean, the only thing I want to be cautious about is that it's so easy to slip into the language of personalization, which, as a generic idea, sure, is what we're talking about. Except the way that ends up playing out in the real world is: we tell Gallup that we would absolutely never vote for a politician that supports violence or denigrates their opponents. And then those same politicians tweet out violent language and share memes that portray their enemies as horrible people, and we like it and we upvote it. We bought into Facebook; we are invested in these things. It's the dynamics between the algorithm, the elites, and the consumers in social media that gave us the social media world of today. So let's be clear: the hard decision here about heterogeneity isn't just giving people what they want, but what they need. And that is profoundly morally complicated. But I want to use that language so that we're owning what we're talking about, and so we're respectful of how easy it would be to be paternalistic about it all. The flip side is a recent paper showing that when those same people, these students, had Facebook taken away from them for a semester, not only did their academics and mental health generally improve, but afterwards they were happy with the policy, which they had hated at first. We often use these willingness-to-pay measures as a way to value intangible goods. Would you pay for Facebook? Erik Brynjolfsson has some papers out about that, and I really like his work. Except this is clear: we're that wave function again. We're all these different people at the same time. We're the person that wouldn't pay a dime, you'd have to pay me to use Facebook, and we're the person that would pay twice as much to be able to use it. It all depends on the history that brought us to that moment.
You, as a business leader, how do you create that history for your employees such that they are the best version of themselves on the job and frankly that they're challenging you to be that same person?
A
I think that's extremely well said. I was going to go to some sort of wrap-up question, but I actually like that note so much, why don't we leave it on that? Vivian, this has been extremely interesting and extremely informative. I've really enjoyed every minute of our conversation. So thanks so much for coming on the program today.
B
It was a pleasure.
A
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.
Podcast: Digital Disruption with Geoff Nielson
Episode: Top Neuroscientist Says AI Is Making Us DUMBER?
Guest: Dr. Vivian Ming, Computational Neuroscientist
Date: December 15, 2025
Theme:
Exploring the profound impact of AI on human cognition, learning, and organizational effectiveness. Dr. Vivian Ming discusses the drawbacks of current AI integration in education and the workplace, how technology can amplify human intelligence (or make us dumber), and what it means to "robot-proof" ourselves and our organizations in the era of digital disruption.
[00:46–13:00]
“A team of modestly intelligent and completely naive individuals in an hour can out-predict a Kalshi market when they're in this hybrid intelligence context.”
— Vivian Ming [01:53]
[20:03–24:35]
Technology tends to increase inequality initially—not out of malice, but because those most able to benefit are already ahead cognitively, socially, or emotionally.
Quote:
"Technology is inevitably inequality-increasing, not because technology is bad... but simply because the people who are best able to benefit from it are the ones that need it the least.”
— Ming [20:10]
AI Tutors & Learning:
"If they ever give students the answer, the students never learn anything."
— Ming [22:41]
[25:30–33:20]
Favorite Tactic:
“You are my nemesis, my lifelong enemy. … Tear it apart. Tell me constructively why I'm wrong and what I can do about it.”
— Ming [28:02]
[34:22–47:15]
[56:06–66:10]
"Never build something which the brain can do for itself. Build things that... challenge our existing fundamental functionality to be better.”
— Ming [57:33]
[67:10–79:01]
“The smartest things that currently exist on the planet are these cyborg collectives of humans and machines truly engaging together.”
— Ming [15:50]
“If they give students the answer, they never learn anything. … The benefits will overwhelmingly flow to the people who don’t need them. And society… will benefit because we'll come up with amazing new creations and products. But interestingly, there are negative effects on the other side. My fears are about cognitive health, about actual reduced learning among students.”
— Ming [22:41]
“In the optimally intelligent organization, the majority of people should be wrong the majority of the time. Otherwise you’re not exploring enough. How do you reward being wrong? How do you celebrate being productively wrong?”
— Ming [49:34]
“If you don't set up the story that that is the culture of our community, of our society and of our company, then it doesn't matter what you imagine your company to be, it won't be that.”
— Ming [51:23]
“Stop pretending there is one kind of person in the world and everything should be built for this fictional non-existent average person.”
— Ming [61:34]
Dr. Vivian Ming delivers a passionate, evidence-rich argument that digital transformation will only elevate humanity if we shift from cognitive automation to cultivating true hybrid intelligence—AI that productively challenges, rather than merely flatters, human thinkers. She urges business and educational leaders to design systems, cultures, and workflows that prioritize engaged human cognition, celebrate productive failure, and invest in the diverse needs and strengths of every individual. Simply automating or personalizing for convenience is not enough—real progress, individually and collectively, is always a little uncomfortable.
Recommended for listeners who want: