
AI pioneer Dr. Richard Wallace reflects on chatbot evolution, from creating ALICE and AIML to modern AI's neural-symbolic fusion.
Podcast Announcer
You're listening to TIP.
Preston Pysh
Hey, everyone. Welcome to this Wednesday's release of Infinite Tech. Today's episode is a deep dive into the early foundations of conversational AI and what they reveal about today's language models. My guest is Dr. Richard Wallace, a pioneering chatbot creator and three-time Loebner Prize winner, best known for building Alice and the AIML language that powered early conversational systems. The Loebner Prize was an annual competition designed to implement Alan Turing's Imitation Game, awarding the chatbot that could most convincingly carry on a human-like text conversation with judges, just as an FYI. So during the show, we talk about why simplicity beat scale in the early AI race, how supervised rule-based systems differ from modern LLMs, what the Turing Test actually misses, and why combining symbolic reasoning with neural networks may matter more than raw model size. This is surely an episode you will not want to miss. So without further ado, let's jump right into the conversation.
Podcast Announcer
You're listening to Infinite Tech by the Investors Podcast Network. Hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today. And now, here's your host, Preston Pysh.
Preston Pysh
Hey, everyone. Welcome to the show. I'm here with Richard Wallace and wow, this is really exciting for me to talk to such a pioneer in this space, in the chatbot AI space. And first of all, welcome to the show. Excited to have you here.
Dr. Richard Wallace
Thank you. Thank you, Preston. It's a pleasure to be here as well.
Preston Pysh
So where I want to start is I'm just super curious how people kind of fall into, you know, their field of expertise. And when I look at what you accomplished very early on back in the 1990s, I'm curious what drove you or motivated you to be paying attention to chatbots and the Turing Test and all of that at such an early phase, because I think for most of the listeners, they know that all this stuff has really come to fruition in the last five or 10 years. It's got on everybody's radar. But you were doing this literally decades before anybody was even aware of these ideas of chatbots and whatnot. So what was your initial motivation to get into this kind of stuff?
Dr. Richard Wallace
That's absolutely right. You know, I like to say that nobody knew what artificial intelligence was until a couple of years ago. And now I'll be sitting in a restaurant somewhere and I'll hear a conversation at the table next to me, and they're talking about AI. Well, anyway, there are several threads that came together that inspired me to work on the chatbot, Alice, and I'll just pull on a couple of those threads here. One is that in 1990, I read an article in the New York Times about the first Loebner Prize contest. Now, the Loebner Prize was an annual Turing Test, an annual contest based on the Turing Test, funded by a rather eccentric philanthropist, Hugh Loebner. And the story with the very first contest was that none of the programs competing came close to passing the Turing Test. They were all just terrible chatbots. But Loebner awarded a bronze medal every year to the chatbot that was ranked highest by the judges in terms of being the most human. And that first year, the bot that won was simply based on the old Eliza psychiatrist program, which, if you're familiar with that, was a very primitive chatbot developed by Joseph Weizenbaum in 1966. And it had very few responses, but it had some clever tricks: it could sort of match keywords in the input, and it had canned responses associated with those keywords. It could invert pronouns. So, you know, if I said, "I came here to talk to you," then it would repeat back, "You came here to talk to me." So it did that sort of pronoun-swapping trick. But when I was in graduate school in the 1980s, this Eliza program was basically considered kind of a dead end, or at best, kind of a hoax, in AI. And not only that, the inventor, Joseph Weizenbaum, ended up pulling the plug on it because he thought it was too dangerous. He thought that people were reading more into it than was actually there.
It was a psychiatrist program, so people were trusting it with their, you know, personal issues and problems. They were surprised to find out that Weizenbaum could read all the transcripts of their conversations.
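As an illustration of the keyword-and-pronoun trick Dr. Wallace describes, here is a toy Python sketch. This is not Weizenbaum's actual 1966 implementation (which used ranked keyword rules and decomposition templates); it only shows the pronoun-swapping idea in miniature, and the word table is invented for this example.

```python
# Toy sketch of ELIZA's pronoun-swapping trick (illustrative only; the real
# 1966 program used ranked keyword rules and decomposition templates).
SWAPS = {
    "i": "you", "me": "you", "my": "your",
    "you": "me", "your": "my", "am": "are",
}

def reflect(sentence: str) -> str:
    """Echo a sentence back with first/second-person words swapped."""
    words = sentence.lower().rstrip(".!?").split()
    return " ".join(SWAPS.get(w, w) for w in words)

# reflect("I came here to talk to you") -> "you came here to talk to me"
```

This reproduces the exchange from the conversation: "I came here to talk to you" is echoed back as "you came here to talk to me."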
Preston Pysh
Wow.
Dr. Richard Wallace
And so he wrote a whole book after that, Computer Power and Human Reason, where he criticized the whole field of AI and his Eliza program in particular. You know, it's really hard to imagine this now, that someone would come up with a new AI application that's very engaging and popular, people are using it, and then they would say, oh, no, this is too dangerous, we have to put the genie back in the bottle. I think most people now would run out and try to find venture capital to start a company to commercialize it.
Preston Pysh
Very true. But isn't this fascinating? The thing that he discovered very early on, in the 90s, and he started playing around with this in the 60s, which is mind-blowing to me. But what he found in the 90s was that there was a huge centralization concern with privacy and what people were putting into these discussions, which is now, you know, a major talking point with AI. And it doesn't seem that, I know I'm generalizing here, but it doesn't seem like the population really cares too much or even thinks about these issues that caused him to shut down his entire effort behind this. I don't know. I find that really fascinating that he discovered this, what, three decades before it became something that the rest of the world should be very concerned about.
Dr. Richard Wallace
Well, he really discovered it in the 1960s, when he first created the program. The other ironic thing about Eliza was that up until very recently, I would say, well, let's say 20 years ago, Eliza was by far the most widely distributed, popular, and well-known AI application.
Preston Pysh
Yeah.
Dr. Richard Wallace
If you knew anything about AI up until maybe the year 2000, then you would know about Eliza.
Preston Pysh
I'm curious, when you read this, I think you said New York Times article in 1990, did you ever think that you would be the winner of this Loebner Prize a decade later?
Dr. Richard Wallace
Well, that planted a seed in my mind and I didn't really do anything about it for about five years.
Preston Pysh
Okay.
Dr. Richard Wallace
So another thread that led to the development of Alice, or the inspiration for Alice, was around that time in the early 90s. It was the end of the Cold War, and so there was a decreased amount of government funding available for AI and robotics research compared to the 1980s. And so a number of us in the robotics field, I was working in robotics at the time, got interested in the idea of minimalism, robot minimalism. And basically that was the idea that we could build robots with very simple, inexpensive sensors and actuators, you know, commodity microprocessors. And as a result of that, you could actually get more lifelike behavior out of these robots than you could with, you know, approaches people had tried in the past with much larger computers and so forth. One of the interesting inventions that came out of that period was the Roomba.
Preston Pysh
Yeah.
Dr. Richard Wallace
If you think of the Roomba rolling around and, you know, bumping into things and changing its direction, it's all basically just a stimulus-response application. We call that stateless. So it's sensing something and then taking an action based on what it's sensing. Changing direction, for example.
Preston Pysh
Yeah.
Dr. Richard Wallace
So that whole approach of minimalism was also in my mind at the time. And that kind of dovetailed with the very simple approach of the Eliza program, which was also kind of stimulus-response. You know, it was so simple that it could respond very quickly. It didn't have to go and do a lot of computations to come up with the responses.
Preston Pysh
What was your inspiration for thinking that simplicity was going to lead you to better results? Was there something in your life or something that you were reading at the time or, you know, what drove you to that intuition?
Dr. Richard Wallace
Like I said, we were working on the minimalist philosophy of robotics.
Preston Pysh
Oh, okay.
Dr. Richard Wallace
Yeah. At that time, I was working on the development of a robot eye. And by that I mean a visual sensor that's based on the architecture of the human eye. So the human eye differs from a TV camera in the sense that a TV camera is basically a square grid of square pixels, but the human eye is more like concentric rings of pixels with higher and higher resolution towards the center. We call that a log map. And so we developed a sensor that had that log-map pixel organization. In order to use a camera like that effectively, you have to be able to point it. So we developed a little high-speed pointing motor, based on a direct-drive design. And that motor could point the eye camera in pan and tilt directions very, very quickly. And again, it was a very simple kind of actuator, simple sensors, and it could move very quickly, could move actually faster than the human eye. So you'd sort of see this thing whipping around and looking at different things, and it was very lifelike.
Preston Pysh
So just for the audience to understand. So in 2000, I believe, 2001 and 2004, Dr. Wallace won the Loebner Prize, which is this Turing Test contest, with the ALICE chatbot that he had created. And I guess for me, what was the major insight that you think that you had back then? You talk about this idea of simplicity, but what would you say was the major insight that you had to outperform everybody else that was competing on what is, I mean, for anybody listening, the most complex, challenging, you know, problem you could ever try to go after? Right. Like, what would you say was your keen insight that allowed you to do this?
Dr. Richard Wallace
Well, it was basically the idea that I could build on the Eliza program. So the Eliza program had about 200 rules. 200, you know, stimulus-response rules. And you can think of that as a pattern and a response. And my idea was to build kind of a super Eliza, where instead of 200 rules, you had thousands and thousands of rules.
Preston Pysh
Yeah.
Dr. Richard Wallace
And in fact, by the time I was entering those contests, I had gotten Alice up to about 50,000 patterns and responses.
Preston Pysh
Wow. Amazing. So, Richard, one of the things that I found really fascinating about you back at this time was that you came up with this artificial intelligence markup language. You effectively, for all intents and purposes, and correct me if I'm mischaracterizing this, you had to come up with your own language in order to kind of build efficiency into how this chatbot was working, which is, you know, as a person who's not very good with languages, I'm much more of a math person, I'm reading this and I'm thinking, this is mind-blowing. So talk to us about this. And what was this insight that you had to come up with AIML, the Artificial Intelligence Markup Language, at the time that you did this?
Dr. Richard Wallace
Well, AIML is based on XML, and XML was very popular at the time. One thing that appealed to me about XML for the purpose of writing chatbots was that I always say XML has an implicit print statement. So when you write the responses, you don't have to put in an expression that says print, blah, blah, blah, something, you know, between the parentheses, because the XML already just provides the text inside the markup. So the response is just the text inside the markup. And then the basic unit of knowledge in AIML I call the category, which is like the rules I was talking about a second ago. So the category consists of a pattern that matches some input, some natural language input, and then a response called the template. The reason it's called the template is because it's not exactly the answer, but it's a template for the answer that can be populated with various other things. And then there was also a recursive element to it, where the response could actually simplify the input into a kind of simpler input. So the example of that is "I want you to tell me who you are right now." You can reduce that by removing the "right now," leaving "I want you to tell me who you are." And then you can remove the "I want you to," so it reduces to just "tell me who you are." And then that reduces to "who are you?" So there was that recursive element built into the responses as well.
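To make the category, pattern, and template idea concrete, here is a minimal Python sketch of how such stimulus-response rules with recursive reduction (what AIML expresses with its `<srai>` element) might work. The categories below are invented for illustration; real AIML is written in XML, and Alice had tens of thousands of categories.

```python
# Toy sketch of AIML-style "categories" (pattern -> template) with the
# recursive reduction described above. Illustrative only; the category
# data here is made up, not taken from the real ALICE bot.
import re

# "*" is a wildcard. A template beginning with "SRAI:" rewrites the input
# and re-matches it, mirroring AIML's recursive <srai> reduction.
CATEGORIES = [
    ("* RIGHT NOW",         "SRAI:{0}"),            # strip trailing "right now"
    ("I WANT YOU TO *",     "SRAI:{0}"),            # strip "I want you to"
    ("TELL ME WHO YOU ARE", "SRAI:WHO ARE YOU"),    # reduce to simpler input
    ("WHO ARE YOU",         "I am a chatbot."),
    ("*",                   "I don't understand."), # catch-all
]

def respond(text: str, depth: int = 0) -> str:
    # Normalize the input the way classic pattern matchers do:
    # uppercase, strip punctuation.
    text = re.sub(r"[^A-Z ]", "", text.upper()).strip()
    if depth > 10:  # guard against accidental reduction loops
        return "I don't understand."
    for pattern, template in CATEGORIES:
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            reply = template.format(*m.groups())
            if reply.startswith("SRAI:"):
                return respond(reply[5:], depth + 1)  # recursive reduction
            return reply
    return "I don't understand."
```

With these rules, "I want you to tell me who you are right now" reduces step by step, exactly as in the example above, until it hits the "WHO ARE YOU" category.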
Preston Pysh
So in general, you were taking language and you were making it way more efficient. And I'm like, where do you even start with something like that? I mean, you'd literally have to go through, there are just so many different variations of language. And I think of the complexity of this. I wouldn't even know where to begin to start writing something that makes it more efficient.
Dr. Richard Wallace
Right.
Preston Pysh
Yeah. So how did you, how did you think about solving that problem?
Dr. Richard Wallace
Well, it all goes back to the conversation logs. So just like Weizenbaum, you know, I could read the transcripts of conversations people were having. By the way, this would have never worked without the Internet, without the World Wide Web, because with the World Wide Web, I could start to accumulate conversations from a very large audience of people. And by looking at the transcripts of those conversations, I could basically program responses to the things people were saying. Later on I realized that there was kind of a Zipf distribution over the things people were saying. So, you know, there's a most common thing people say, which is "hello," and then "who are you" and "how are you" and "I like something," so you can create the responses in order of how frequently people say particular things.
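The log-driven prioritization Dr. Wallace describes can be sketched as a simple frequency count: normalize the logged inputs, tally them, and author rules for the most common inputs first. The sample log below is invented for illustration.

```python
# Sketch of Zipf-style prioritization from conversation logs: count how
# often each normalized input appears, then write rules for the most
# frequent inputs first. The sample log is made up.
from collections import Counter

log = ["Hello", "Who are you?", "hello", "How are you?", "HELLO", "who are you"]

def normalize(s: str) -> str:
    """Uppercase and strip punctuation so variants count as one input."""
    return "".join(ch for ch in s.upper() if ch.isalpha() or ch == " ").strip()

counts = Counter(normalize(line) for line in log)
# Most common inputs come first; these are the ones worth writing rules for.
priorities = [inp for inp, _ in counts.most_common()]
```

Here `priorities` would begin with "HELLO", matching the observation that greetings dominate real chatbot logs.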
Preston Pysh
Let's take a quick break and hear from today's sponsors. Have you ever been interested in mining bitcoin? As a miner myself, I've been using simple mining for the past few months and the experience has been nothing short of seamless. I mine with the pool of my choice and the bitcoin is sent directly to my wallet. Simple Mining, based in Cedar Falls, Iowa, offers a premium white glove service designed for everyone from individual enthusiasts to large scale miners. They've been in business for three and a half years and currently operate more than 10,000 bitcoin miners based in Iowa. Their electricity is over 65% renewable thanks to the abundance of wind energy. Not only do they simplify mining with their top notch hosting and on site repair services, but they also help you benefit financially by running your operations as a business. This approach offers significant tax advantages and enhances the profitability of your investment. Do you ever worry about the complexities of maintaining your mining equipment? They've got you covered for the first 12 months. All repairs are included at no extra cost. If you experience any downtime, they'll credit you for it. And if your miners aren't profitable at the moment, simply pause them with no penalties when you're ready to upgrade or adjust your setup. Their exclusive marketplace provides a seamless way to resell your equipment. Join me and many satisfied miners who have simplified their Bitcoin mining journey. Visit SimpleMining IO Preston to get started today that's SimpleMining IO Preston to get started today with simple mining, they make it simple.
Sponsor Voice
Imagine scaling your business with technology that understands your customers.
Preston Pysh
Literally.
Sponsor Voice
That's the story behind Alexa and AWS AI. Every day Alexa processes over 1 billion interactions across 17 languages, all while reducing customer friction by 40%. It's not just about making life easier, it's also about transforming customer engagement and generating new revenue streams. Behind the scenes, AWS AI powers more than 70 specialized models working together to create natural conversations, proving how enterprises can deploy AI at scale with confidence and security. Alexa's AI capabilities were battle-tested across Amazon's massive operations, delivering real, measurable impact at scale. These same innovations now give other businesses a proven framework to boost efficiency, unlock new revenue streams and gain a lasting market edge. Discover the Alexa story at aws.com AI R story. That's aws.com AI R story.
Preston Pysh
The more your bitcoin holdings grow, the more complex your challenges become. What started as simple self-custody now involves family legacy planning, sophisticated security decisions and navigating situations where a single mistake could cost generations of wealth. Standard services weren't built for these high-stakes realities. That's why long-term investors choose Unchained Signature, a premium private client service for serious bitcoin holders who want expert guidance for resilient custody and an enduring partnership. With Signature, you're paired with your own dedicated account manager, someone who understands your goals and helps you every step of the way. You get white-glove onboarding, same-day emergency support, personalized education, reduced trading fees, and priority access to exclusive events and features. Unchained's collaborative custody model is designed to provide the same security posture as the world's biggest bitcoin custodians, but for those who prefer to hold their own keys. Learn more about Unchained Signature at Unchained.com Preston. Use code PRESTON10 at checkout to get 10% off your first year. Bitcoin isn't just for life, it's for generations. All right, back to the show. My question for you is, and I'm struggling to find a way to frame this, but you wrote these thousands of rules, and you were also working on a way to compress or make the English language more efficient. What did you fundamentally learn through the experience of writing these thousands of rules and rules of thumb of compression? Because when I think about it, we look at these LLMs and machines are doing all of this really hard and complex work. But I would imagine what you were doing there in the 90s and early 2000s was exactly what all these LLMs are doing today, but you were doing it manually.
And so I guess, I hear these people that say, well, we have no idea what's behind these ones and zeros in all these LLMs, which I guess is a true statement, right?
Dr. Richard Wallace
Yeah.
Preston Pysh
But if a human was going to maybe be able to understand what it is that it's doing, I think you would be one of the very few people on the planet that could maybe help us understand what that is. Because you did this manually for so many years.
Dr. Richard Wallace
Right. Well, there are so many things wrapped up in that question, so let me see if I can pull it apart. So there's always been a kind of tension in the history of artificial intelligence between, let's say, supervised learning and unsupervised learning. So what I was doing was what we call supervised learning, because I was playing the role of a teacher or, you know, a guide. So whenever I added a new response, it was manually added, as you're saying, driven by a particular input that I saw in the conversation logs. And so the way that I'm teaching the robot is by acting as its teacher, basically, and saying, you know, when you see this, you should say that.
Preston Pysh
Yeah.
Dr. Richard Wallace
And, you know, that's in contrast to unsupervised learning, which is what these LLMs are doing. They're basically, you know, trying to accumulate a lot of inputs and, you know, find the neural network weights that match it to particular outputs. And so with that technique, you can get phenomenal results, obviously. But as you're saying, it's difficult to know how the LLM came up with particular responses. Whereas in the supervised learning case, where it's all a symbolic process, it's very easy to trace back through the, you know, the logic of the program and see what caused a particular response to be generated. And I always say that people who do supervised learning approaches spend all of their time doing creative writing, which is what I was doing with the ALICE bot. But people who do unsupervised learning spend all of their time deleting crap from the database.
Preston Pysh
Yeah.
Dr. Richard Wallace
And that's sort of what's going on with the LLMs now is, you know, they're having to put a lot of work into filtering to make sure they don't say anything inappropriate or offensive or political. And, you know, that ends up being a lot of manual work as well.
Preston Pysh
Yeah. And I guess my understanding is that for everybody on the cutting edge of AI today, that's the holy grail: to get the human out of the loop and for it to be completely AI-generated and filtered, with no humans there. As a person who deeply understands this, and the way that you frame that is this back and forth where there are consequences to one side and the other: is there a moment where you think they will be able to completely remove the human from the loop and have it progress in a way that's actually beneficial? Or do you think that the more they lean into removing the human from the loop, the more they are setting themselves up for a systemic failure, because it's going to spiral into this AI slop, if you will, or it's creating and generating content in a direction that's so fast and so extreme that they get away from human filtering altogether, and it just kind of turns into, almost like, a runaway virus, if you will? Is that how you kind of see this, that it needs to be balanced, or is it even possible for it to go in that direction without humans?
Dr. Richard Wallace
Well, it's so hard to predict the future of AI. I would have never expected this whole development to come along in the first place. But you know, I always think of a child learning language, and there are big differences here between a child learning language and an LLM. You know, a kid doesn't have to scan the whole Internet to learn how to speak a language. In fact, they're pretty good at, you know, what we call one-shot learning. You know, if you say to a kid, this is a dog, then they can instantly recognize every dog in the world as a dog. But what also comes into play here is the supervised versus unsupervised learning dichotomy, which is: if you are a kid and you have a good teacher and good parents, you'll learn to speak very well. But if you're a kid who has to pick up language on the street without any supervision, then your language learning won't be nearly as good. And so the LLM is more like the kid out on the street learning language without any supervision. And that's why they learn so much inappropriate and offensive material and so on.
Preston Pysh
What did the wins teach you back in the day when you were winning this, about how humans judge intelligence?
Dr. Richard Wallace
Well, you know, I can say the same thing about LLMs now that I said about my chatbot back then, which is that people say, well, these chatbots are becoming more and more like humans. And, you know, I have a different opinion about that, which is that what it's really showing us is that people are more like robots than we would like to think we are. Because it's not that the robot is becoming more like a human, it's that it's revealing to us how robotic we are. And you know, back in the early days of working on Alice, I came to realize that most people most of the time are saying things that they themselves have said before or that they've heard other people say before. And even when they're writing, they're basically synthesizing thoughts and ideas that are not necessarily original. And all of these chatbots work because language is predictable, and predictable means robotic. So I always say that if we were all William Shakespeares, uttering an original line of poetry with every sentence we spoke, then these chatbots would never work, because they're based on language being predictable, not original.
Preston Pysh
Is it fair to say that you would suggest that humans judge intelligence by conversational flow? Most people are looking at that and saying, oh, that's intelligence. But then you're looking at it and you're saying that's not intelligence, it's just repetition. I think that's kind of what you're getting at.
Dr. Richard Wallace
Yeah, Repetition. But robotic, predictable.
Preston Pysh
Yeah. You know, it's interesting, I just read something, it was like last week, and I think Google came out with this many months ago. But for them to do this long-term learning where it has much more of a memory, it's highly based on whether something's novel or not relative to its index of everything that it's been trained on. And when it sees this novel thing that it wasn't predicting or expecting to come next, it then stores that in its long-term memory, or, I apologize for the terminology here, Dr. Wallace, it flags it as something that is worthy of being remembered because it's novel and so different and outside of what it would have predicted to be the next thing. And it's interesting that it's in keeping with Claude Shannon's information theory and how it's all aligned. I'm curious if you have any opinions on that in particular and whether you think that that is a key component of intelligence or how new things are discovered in knowledge in general.
Dr. Richard Wallace
Well, that really gets to the heart of what I think the difference is between humans and robots, which is that, like I said, I think most people most of the time are acting like robots. They're just acting in kind of a stimulus-response fashion. Just as an aside, I always used to say that most human conversation is stateless, meaning that what I'm saying to you right now only depends on the question that you just asked me, and we can forget the whole history of our conversation up to this point. You know, one of the pieces of evidence for that is if you can imagine yourself having a casual conversation with someone at a party, say, and then you say, oh, where did you go to college? And they say, oh, I went to Harvard, I already told you that. You kind of forgot that you had already talked about college earlier in the conversation.
Preston Pysh
Yeah.
Dr. Richard Wallace
And you know, you're just responding to the most recent thing you heard and most recent input. But what really gets to the difference between humans and robots is even though most people most of the time are speaking in this kind of reactive behaviorist way, it is possible for people to have original thoughts and be creative. And you know, it's almost like a muscle that you need to exercise in order to build it up. If you want to break out of that robotic mold, then you have to put some effort into trying to be creative and original with your thoughts and thinking and ideas.
Preston Pysh
Let's take a quick break and hear from today's sponsors.
Sponsor Voice
You know what sets the best businesses apart? It's how they leverage innovation to turn complexity into growth. That's exactly what Amazon Ads is doing, powered by AWS AI. Every day, Amazon Ads processes billions of real-time decisions, optimizing ad performance across a $31 billion advertising ecosystem. The result is campaigns that run 30% faster and deliver measurable business impact at scale. And this is how Amazon itself drives growth. Their agentic AI transforms marketing from a resource-heavy process into an intelligent autonomous system that maximizes ROI and empowers marketers to focus on creativity and strategy. Amazon Ads is proving that AI-driven advertising isn't just the future, it's the new competitive advantage. And better yet, every enterprise can apply the same innovation playbook that Amazon perfected in house. See the Amazon Ads story at aws.com AI R story. That's aws.com AI R story. Startups move fast, and with AI they're shipping even faster and attracting enterprise buyers sooner. But big deals bring even bigger security and compliance requirements. A SOC 2 isn't always enough. The right kind of security can make a deal or break it. But what founder or engineer can afford to take time away from building their company? Vanta's AI and automation make it easy to get big-deal ready in days. And Vanta continuously monitors your compliance so future deals are never blocked. Plus, Vanta scales with you, backed by support that's there when you need it every step of the way. With AI changing regulations and buyer expectations, Vanta knows what's needed and when, and they've built the fastest, easiest path to help you get there. That's why serious startups get secure early with Vanta. Our listeners get $1,000 off at vanta.com billionaires. That's V A N T A dot com billionaires for $1,000 off. It's the new year, which means that it's the best time to finally start the business you've been dreaming about.
Just a couple years ago, I launched my own e-commerce business and Shopify was exactly the tool I needed to get started. While many people continually push off their dreams until the next year, I am here to tell you that now is the time to capitalize on the opportunities right in front of you. Shopify gives you everything you need to sell online and in person. Millions of entrepreneurs, including myself, have already made this leap, from household names to first-time business owners just getting started. Choose from hundreds of beautiful templates that you can customize and use their built-in AI tools to write product descriptions or edit product photos. And as you grow, Shopify grows with you every step of the way. In 2026, stop waiting and start selling with Shopify. Sign up for your $1 per month trial and start selling today at shopify.com WSB. Go to shopify.com WSB. That's shopify.com WSB. Start this new year with Shopify by your side.
Preston Pysh
All right, back to the show. Do you think that the Turing Test actually measures intelligence or is it something else entirely?
Dr. Richard Wallace
I'm so happy you asked me about the Turing Test. So most people understand the Turing Test as a sort of game where there are three players. You have a person who's called the interrogator or the judge, and they're communicating through a teletype, a text-only medium, you know, much like texting on your phone but without any audio or visuals, just typing. And then of the two entities that the judge is talking to, one is a human and one is a machine. So the judge has to decide which one is the human and which one is the machine. And if they misidentify the machine as the human, then it's said to pass the Turing Test. But you see, this has a big problem as a scientific experiment, because it's not really clear how often the interrogator has to, you know, misidentify the human. Is it 50% of the time, 75% of the time, 100% of the time? What does that even mean, that the robot is more human than a human? So in Turing's 1950 paper, "Computing Machinery and Intelligence," he actually describes two different versions of the test, or the game. And earlier in the paper, he described something called the Imitation Game, which, as far as I understand, was based on a real parlor game that people played in Victorian England. And in this game, again, there are three players: the judge or the interrogator, and the other two players are a man and a woman. And let's just set aside the, you know, the gender issues in the context of writing in 1950 here. So there's a man and a woman sequestered away, in the Victorian England case in different rooms. And then the judge is sending them handwritten questions back and forth. And the judge's job is to decide which one is the man and which one is the woman. Now, furthermore, Turing stipulated that the woman should always tell the truth and the man should always lie. So now if you ask the man, are you a woman? He would say yes, because he has to lie.
Preston Pysh
Okay.
Dr. Richard Wallace
And then, you know, the judge's job is to try to figure out which one is the man and which one is the woman. Now, if you replace the lying man in that scenario with a machine. Okay, let's say you replace the man with a very crude chatbot like Eliza or even Alice.
Preston Pysh
Yeah.
Dr. Richard Wallace
Then the judge could identify the woman correctly 100% of the time. Okay. Because it's clear that only one of the players is a human at all, and that has to be the woman. So now, as a scientific experiment, we can say, let's run this experiment with 100 judges and 100 men and 100 women. I don't know exactly how many are needed for statistical accuracy, but let's just say we did a random sample where we collected the results of this game for a large number of players. Then you could measure a certain percentage of the time that the judge would identify the woman correctly. And, you know, let's say that's 70% of the time. Now, if you replace the lying man with a computer, and the computer is a very good AI that can actually play the role of the lying man, then you should get closer and closer to that actual 70% measurement. So that's actually a better scientific experiment than the Turing Test.
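Wallace's statistical framing can be sketched as a toy simulation. Everything here is illustrative: the 70% human baseline, the near-perfect rate against a crude bot, and the rate for a hypothetical strong AI are assumed numbers standing in for what a real experiment would measure.

```python
import random

def play_round(p_correct, rng):
    """One round of the Imitation Game: the judge identifies the woman
    correctly with probability p_correct."""
    return rng.random() < p_correct

def run_experiment(n_rounds, p_correct, rng):
    """Fraction of rounds in which the judge picks the woman correctly."""
    correct = sum(play_round(p_correct, rng) for _ in range(n_rounds))
    return correct / n_rounds

rng = random.Random(0)

# Baseline: human man vs. human woman; assume judges are right ~70% of the time.
baseline = run_experiment(10_000, 0.70, rng)

# A crude chatbot replaces the lying man: only one player reads as human
# at all, so the judge's accuracy jumps toward 100%.
crude_bot = run_experiment(10_000, 0.99, rng)

# A strong AI playing the lying man should pull the measured rate back
# toward the human baseline; that convergence is the measurable criterion.
strong_ai = run_experiment(10_000, 0.72, rng)

print(baseline, crude_bot, strong_ai)
```

The point of the sketch is that "passing" becomes a convergence of one measured rate toward another, rather than a one-off judgment call about fooling a judge.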
Preston Pysh
Very interesting.
Dr. Richard Wallace
Yeah. Yeah. The Loebner contest was really based on the original, standard Turing Test.
Preston Pysh
Turing Test. Okay.
Dr. Richard Wallace
Yeah. And, you know, the rules changed from year to year depending on, you know, who was hosting the contest. But Loebner's rule was basically that if 50% of the judges (usually there were four, so two out of four judges) misidentified the robot as a person, then he would award the silver medal for passing the Turing Test. It was never awarded, by the way.
Preston Pysh
It was never awarded. Interesting.
Dr. Richard Wallace
Yeah.
Preston Pysh
If you could get in a time machine right now and go back to your days, call it 2000, when you had done this, what would be the thing that you would whisper to yourself as a hint as to how to improve the chatbot that you had back then?
Dr. Richard Wallace
I would probably tell myself, don't even do this. Why?
Preston Pysh
Because you know how hard it is and how.
Dr. Richard Wallace
Okay.
Preston Pysh
Why?
Dr. Richard Wallace
There was no money to be made from chatbots until very recently.
Preston Pysh
You were very early.
Dr. Richard Wallace
Yeah, the Loebner contest was always the domain of, you know, hobbyists and amateur programmers. There were a few, you know, academic entries, but no big companies ever got involved in it. And then in the 2000s, I organized a number of chatbot conferences, you know, international chatbot conferences, and we had a hard time getting 25 people to attend.
Preston Pysh
Oh, really?
Dr. Richard Wallace
Okay. Yeah. So, you know, after many years of really struggling with this and trying to figure out how to make a living with chatbots, I did co-found a company called Pandorabots, which, you know, was based on attempting to commercialize the AIML bots. But, you know, after a while, in the early teens, I should say, I just decided to get out of the field completely and I went to work in healthcare.
Preston Pysh
Yeah.
Dr. Richard Wallace
But then in the past five or six years, I've gotten back into AI as it's become more, you know, lucrative, I should say.
Preston Pysh
It seems like in 2017, Google came out with this paper. It was called "Attention Is All You Need." And this seemed to be a very seminal breakthrough in how to, for all intents and purposes, do what you were doing in a very manual way and let machines do it way faster and with way more horsepower and more data. Right. I'm curious, when this paper came out, did you read it when it first came out, and were you kind of aware of this? Or did it kind of hop on your radar a couple years after, when we started seeing the breakthroughs?
Dr. Richard Wallace
Yeah, I was really not paying attention to it at the time. Like I said, it was.
Preston Pysh
Yeah.
Dr. Richard Wallace
Working in healthcare, you know, I don't think the LLM industry really came to my attention until, you know, we started hearing about GPT.
Preston Pysh
Do you think that that paper was kind of like a really important, seminal piece of work for people to kind of understand how to start doing this in a mechanical machine? Kind of way.
Dr. Richard Wallace
Yeah, obviously. Obviously. That was a breakthrough. Yeah.
Preston Pysh
Wow. And so in your own words, what would you say? I mean, we know attention is a big piece of it, but I think for somebody that just kind of hears that label is like, okay, well what does that mean? If you were going to try to explain to somebody in a very simple way, like what is that paper saying that has enabled, you know, machine learning to do what it does?
Dr. Richard Wallace
Well, in a way, I'm reminded of the work we talked about earlier, which was the robot eye in the early 90s, because that was also an attention-based mechanism. So I described how, in order to make use of that, you know, log map arrangement of pixels where there's high resolution towards the center, you have to be able to point the camera so that the high resolution can be aimed at something interesting. Well, how do you know it's interesting? If you see something in the periphery, for example movement, you want to move your eye towards the thing that you're seeing in the periphery and place the attention on that. So attention has to do with focusing your highest-resolution sensory capability on whatever seems most interesting in a scene. I think there's an analog for that in the LLM version of attention as well. They're sort of swinging the gaze in a new direction, depending on what they see in the periphery.
Preston Pysh
Okay. So this is super. I love this example because it's very physical and you can kind of make sense of it very simply because it's dealing with vision. And so when you're changing your attention and you're able to zoom in, because you have the capacity to zoom in on something, how are you filtering or knowing what's novel in that broader sight picture in order to know, to adjust the focus to that thing? What gives us that capacity to know? Oh, well, I'm looking at you and now I'm focusing on the tree back behind you and I'm zooming in on that and I'm putting my attention there. What would be that insight in order to say, oh, that's different, that's something I need to dial in on or pay more attention to.
Dr. Richard Wallace
Yeah. A long time ago, a guy called Hans Moravec, who's very interesting, we should talk about him some more. He came up with an attention mechanism called an interest operator. And this is for computer vision again.
Preston Pysh
Okay.
Dr. Richard Wallace
And it's basically that things in your visual field that have high variance, you know, a high ratio of dark to light, are more interesting than other things. So that would typically be edges, like the edges of the tree you just described, or corners of things, or just any sort of bright spot against a dark background, or vice versa. And then recognizing those in the periphery of your visual field would cause you to move the center of your visual field towards whatever the interest operator is highlighting.
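The variance idea Wallace describes can be sketched in a few lines. This is a simplified illustration, plain patch variance rather than Moravec's exact directional formulation, and the scene below is a made-up example: a flat gray image with one high-contrast checkerboard region that the operator should pick out.

```python
import numpy as np

def interest_operator(image, patch=8):
    """Return the (row, col) of the patch with the highest local variance.

    High variance (strong dark/light contrast: edges, corners, bright
    spots against dark backgrounds) marks the most "interesting" place
    to aim the high-resolution center of the visual field."""
    best_score, best_pos = -1.0, (0, 0)
    h, w = image.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            score = image[r:r + patch, c:c + patch].var()
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# A flat gray scene with one high-contrast corner in the lower right.
scene = np.full((64, 64), 0.5)
scene[48:, 48:] = np.indices((16, 16)).sum(axis=0) % 2  # checkerboard patch
print(interest_operator(scene))  # -> (48, 48): the gaze moves to the contrast
```

A real robot eye would run this only over the low-resolution periphery and then saccade the high-resolution center toward the winning patch.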
Preston Pysh
Fascinating. Okay, here's an odd question for you. Do you think your real subject of study ended up being humans rather than machines?
Dr. Richard Wallace
Oh, well, you know, I'm a computer programmer, so I was always more interested in the machine side of it. I think I did learn a lot about human conversation from monitoring those conversation logs.
Preston Pysh
The reason I asked this question is, you know, in kind of research and preparation for the interview, it seemed to me that you have this opinion, I suspect, and correct me if I'm saying any of this wrong, but it seems like you were not convinced that any of these chatbots were actually saying anything intelligent. It was just. It was this canned response that was coming back. And then the reaction that humans had was like, wow, this thing is real and there's like, something behind it. And so I guess that's the impetus for the question is because you were, I suspect you were fascinated at the response of people and how duped I guess they were by interacting with some of these chat bots. So I guess that's more of the impetus to the question. And would you agree with everything that I just said?
Dr. Richard Wallace
Well, I used to categorize the users, or the clients as I call them, into three categories: A, B, and C. Okay. So the A clients are abusive. Okay, so they're going to say, how can I put this, you know, very inappropriate things to the chatbot. And you see those in the conversation logs. Although you always have to wonder, if someone is saying, you know, "I hate you," or even "I love you," is that what they really have in mind? Or are they just, you know, trying to get a response out of the robot and see.
Preston Pysh
Testing the limits.
Dr. Richard Wallace
Yeah, testing the limits, yeah. And then the next category, B, are just average users. So those category B people were the ones who could suspend their disbelief, and they would be very engaged with the bot and have, you know, very long conversations, come back and continue their conversations, and so on. And so that would be the group that, you know, as you're saying, would be kind of reading more into the bot than was actually there, because they're engaged with it on an emotional level. And then the last category, C, I call the critics, which are people who know something about computer programming and AI, and they just think this thing is terrible, and, you know, they walk away after a few interactions.
Preston Pysh
Yeah. Well, I'm curious to hear your thoughts on where we're at now and where you see some of this going next. You know, you have some really smart people in this space that have, you know, demonstrated their knowledge through the things that they've built. And I think, you know, if, if we back up the tape three years ago, many of them were very suspect as to whether AGI could ever be possible today. And I have a hard time knowing if this is them trying to get more capital or they actually believe that we're on the cusp of AGI. I don't know which one of those two it is, but I'm just curious to hear your general thoughts on where you see us today and like what the next five years might bring. As exciting of a next five years as we've seen in the past five years. Kind of just give us your one over the world on it.
Dr. Richard Wallace
Well, I definitely think it'll be exciting. You know, the term AGI seems a little strange to me because it's what we've always called AI. You know, AI has always been a goal that's just out of reach. And you know, we have an imagination of what it is based on seeing science fiction movies and that sort of thing. You know, HAL and R2-D2 and all those examples give us a template for what we'd like to see in an AI. And so it seems kind of odd that they've come up with a new term, AGI, to kind of move the goalposts even further. But I'm very skeptical about that. You know, a very simple answer to this question, which a lot of people I know would not agree with, is that God gave human beings a soul, but machines don't get a soul. So, you know, in the sense that human beings have freedom of thought and self-reflection and creativity, I don't think those things will be reproduced in a computer anytime soon.
Preston Pysh
Yeah, and I think I'm with you 100% on what you just said. And I know there's a lot of people that want to argue these ideas, and we're not here to do that. But I'm with you 100%. I think that there is something very special and unique about just any living being, not just humans. I think any living being has this special connection from, you know, a higher source. And I don't think that we're necessarily going to see, you know, these humanoid robots have whatever that is. And I have no idea how to define that. But I do think that some of these humanoid robots, call it five or ten years from now, are going to do things. And it goes back to some of your earlier comments about these chatbots and how people were just like, oh my God, I feel like I'm talking to a real person. This feels real. And I think that some of these humanoid robots are going to feel like real humans to a lot of people. But that doesn't mean that it's the same thing as us. I think we are something very hard to define, very different. But oh my goodness, Richard, I really enjoyed this conversation. Anything else that you think is super important on this particular topic, that you see right now or kind of going into the future, that you think is worthy of highlighting or that the audience should know?
Dr. Richard Wallace
Yeah. Well, the company I work for right now, Franz, okay, it's actually a very old AI company, founded in 1985. And Franz started out as a company selling Lisp compilers. But then, you know, by the end of the 1990s, very few people were paying money for language software, you know, because there's so much free language software available. So they pivoted to graph database technology. Okay. And, you know, without getting into too much detail about what that is, now that we have the LLMs, we are taking an approach called neurosymbolic computation. So earlier, in the history of AI, we talked about supervised versus unsupervised learning. Another dichotomy in AI is between symbolic and neural approaches. So symbolic approaches are things like, you know, theorem-proving programs, or the early chatbots that we were talking about based on rules, where basically you're manipulating symbols. Or you can also think of a chess-playing program, you know, which is very mechanical, manipulating symbols and searching through the space of moves. And so the symbolic approach is in contrast to this neural learning approach. And now we're basically trying to find the best of both worlds. So one example of that is in the medical field. You can make predictions about someone's, well, their mortality, how likely they are to be readmitted to the hospital within 30 days after being discharged, or how likely they are to have a stroke, and various other things. But the medical field has developed these symbolic techniques for making those predictions. And so in the case of stroke from afib, there's a test called CHA₂DS₂-VASc, and it basically takes into account criteria like, you know, your age and gender, whether you've had congestive heart failure, history of hypertension, and various other factors like that.
And when you plug in those values, it produces a number which can then be used to estimate the likelihood of you having a stroke. Now, you could also do that with a neural network, a recurrent neural network, where you basically train it by feeding in the patient data, the diagnostic data, and their medical history, and then just look at whether they had a stroke or not. So you can train this neural network to take new patient data and give some prediction about whether they're going to have a stroke. Then the third way of doing that is to use an LLM. You can just simply upload the entire patient chart to the LLM and say, how likely is this person to have a stroke? And so what we've been doing is sort of combining those three approaches together. You know, we've got the symbolic estimate, we've got the neural estimate, and we've got the LLM estimate. You could potentially display all three of those, and then it's up to the clinician to make a judgment. Or you could even put them all back into a different LLM and ask the LLM: which one of these measurements is best? Which one of these predictions is best? So it's an effort to combine the best of the symbolic approaches with these newer neural approaches.
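The "symbolic" leg of that ensemble is exactly the kind of rule set that fits in a few lines of code. Below is a minimal sketch of the CHA₂DS₂-VASc point system using its standard published point values; the function name and the example patient are made up for illustration, and the resulting score only maps to published annual stroke-risk tables, which are not reproduced here.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """CHA2DS2-VASc: a symbolic, rule-based stroke-risk score for
    atrial fibrillation. Each criterion contributes fixed points."""
    score = 0
    score += 1 if chf else 0                 # C: congestive heart failure
    score += 1 if hypertension else 0        # H: history of hypertension
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)  # A2 / A
    score += 1 if diabetes else 0            # D: diabetes
    score += 2 if prior_stroke_tia else 0    # S2: prior stroke or TIA
    score += 1 if vascular_disease else 0    # V: vascular disease
    score += 1 if female else 0              # Sc: sex category (female)
    return score

# A hypothetical 72-year-old woman with hypertension and diabetes:
# 1 (age 65-74) + 1 (female) + 1 (hypertension) + 1 (diabetes) = 4.
print(cha2ds2_vasc(age=72, female=True, chf=False, hypertension=True,
                   diabetes=True, prior_stroke_tia=False,
                   vascular_disease=False))  # -> 4
```

In the neurosymbolic setup Wallace describes, this deterministic score would sit alongside a trained network's probability and an LLM's chart-level judgment, three independent estimates for the clinician (or another model) to weigh.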
Preston Pysh
Wow. Say the name of the company one more time. I want to make sure I have the name of it in the show notes for people if they want to check it out.
Dr. Richard Wallace
Franz.
Preston Pysh
All right. Well, I am just so thrilled to be able to talk to somebody who's been in this space for decades. It's miraculous to see what's happening, and I can only imagine where we're going to be five years from now. But Dr. Richard Wallace, thank you so much for making time and coming on the show and imparting all of this knowledge that you have. We really appreciate it.
Dr. Richard Wallace
Well, I'm glad people want to talk to me about it after a long time of people not being very interested.
Preston Pysh
Well, there's a lot of people interested now, let me tell you. But thank you again for making time and coming on the show.
Dr. Richard Wallace
My pleasure. It was great talking with you as well.
Podcast Announcer
Thanks for listening to TIP. Follow Infinite Tech on your favorite podcast app and visit theinvestorspodcast.com for show notes and educational resources. This podcast is for informational and entertainment purposes only and does not provide financial, investment, tax, or legal advice. The content is impersonal and does not consider your objectives, financial situation, or needs. Investing involves risk, including possible loss of principal, and past performance is not a guarantee of future results. Listeners should do their own research and consult a qualified professional before making any financial decisions. Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial product. Hosts, guests, and the Investors Podcast Network may hold positions in securities discussed and may change those positions at any time without notice. References to any third-party products, services, or advertisers do not constitute endorsements, and the Investors Podcast Network is not responsible for any claims made by them. Copyright by the Investors Podcast Network. All rights reserved.
This episode dives deep into the origins and evolution of conversational AI, exploring the early motivations behind chatbot development, the technical and philosophical distinctions between early rule-based systems and modern neural networks, the complexities behind the Turing Test, and the interplay between human and machine intelligence. Dr. Richard Wallace shares first-hand accounts from decades in the field, revealing both the humility and profound insight that comes from seeing AI evolve from the fringes to the center of public conversation.
[02:39 – 08:50]
"The inventor, Joseph Weizenbaum, ended up pulling the plug on [ELIZA] because he thought it was too dangerous. He thought that people were reading too much into it than was actually there." – Dr. Richard Wallace [05:04]
[08:50 – 13:45]
[11:31 – 13:45]
"In AIML...the category consists of a pattern that matches some input, some natural language input, and then a response called the template." – Dr. Richard Wallace [12:13]
[14:07 – 21:54]
"People who do supervised learning approaches spend all of their time doing creative writing, which is what I was doing with the ALICE bot. But people who do unsupervised learning spend all of their time deleting crap from the database." – Dr. Richard Wallace [21:54]
[22:10 – 28:35]
"A kid doesn't have to scan the whole Internet to learn how to speak a language. In fact, they're pretty good at...one-shot learning." – Dr. Richard Wallace [23:19]
"People say, well, these chatbots are becoming more and more like humans. What it's really showing us is that people are more like robots than we would like to think we are." – Dr. Richard Wallace [24:31]
[32:13 – 36:22]
"It's not really clear how often the interrogator has to...misidentify the human. Is it, is it 50% of the time, 75% of the time, 100% of the time?" – Dr. Richard Wallace [32:13]
[36:37 – 38:35]
"I would probably tell myself, don't even do this. There was no money to be made from chatbots until very recently." – Dr. Richard Wallace [36:37]
[38:35 – 41:15]
"Attention has to do with focusing your highest resolution sensory capability on whatever seems most interesting in a scene." – Dr. Richard Wallace [39:25]
[42:10 – 44:31]
"Category B people were the ones who could suspend their disbelief...engaged with it on an emotional level." – Dr. Richard Wallace [43:48]
[45:23 – 47:36]
"A very simple answer to this question...is that God gave human beings a soul, but machines don't get a soul..." – Dr. Richard Wallace [45:23]
[47:36 – 51:05]
"Now that we have the LLMs, we are taking an approach called neurosymbolic computation...combining the best of the symbolic approaches with these newer neural approaches." – Dr. Richard Wallace [47:36]
On Chatbots Revealing Human Nature:
"What it's really showing us is that people are more like robots than we would like to think we are." – Dr. Richard Wallace [24:31]
On Building Chatbots Before the Mainstream:
"There was no money to be made from chatbots until very recently...I just decided to get out of the field completely and I went to work in healthcare." – Dr. Richard Wallace [36:37]
On the Evolution of the Turing Test:
"As a scientific experiment, it's not really clear how often the interrogator has to...misidentify the human. Is it, is it 50% of the time, 75% of the time, 100% of the time?" – Dr. Richard Wallace [32:13]
On the Limits of Current AI:
"God gave human beings a soul, but machines don't get a soul." – Dr. Richard Wallace [45:23]
On Human Learning vs. LLMs:
"A kid doesn't have to scan the whole Internet to learn how to speak a language. In fact, they're pretty good at, you know, what we call one-shot learning." – Dr. Richard Wallace [23:19]
If you want to learn more about Dr. Richard Wallace's current work, check out his company, Franz.
For further resources, transcripts, and more episodes, visit theinvestorspodcast.com.