Transcript
A (0:00)
Welcome to the podcast. I'm your host, Jayden Schaefer. Today on the show, I wanted to go back in time a little bit and actually talk about the history of AI. Typically, I'm talking about news in AI or interviewing people that are working at, you know, some of the biggest AI companies. But I wanted to talk a little bit about the history because I've been researching it lately, and personally, for me, it is definitely not boring. There are just so many wild twists in this, and I think, you know, if this is an area that we all spend so much time focusing on, with so much money in the world being poured into it, I want to go back and talk a little bit about some of the background that basically laid the foundation for what we have in AI today. So before we get into all of that, you probably pay for multiple subscriptions to get access to all of the best AI tools. I know it can definitely add up fast. I had the same problem, and so I actually built AI Box AI. You can spend $20 a month and you get over 40, actually, I believe now we're up to 50 of the top AI models on one platform. So you get text, image, audio, everything you need in one place. You don't have to juggle tabs, and you don't have to waste money on a whole bunch of overlapping subscriptions. If you want to check it out, there's a link in the description to AI Box AI. Okay, let's get into the podcast today. So I think the idea of artificial intelligence actually starts way earlier than a lot of people think. It's actually before computers were very powerful at all. People were already kind of asking the question: can machines think? And if you go back to the 1940s and 1950s, computers were, you know, basically just glorified calculators. I mean, we've all seen the pictures of these computers that are, you know, the size of a room when they got more advanced. But before that, they were the size of a house.
And before that, it was basically the size of a warehouse, right, for one single computer. And so even back then, there was a whole bunch of these kind of visionary thinkers that believed that these machines could eventually reason or learn or maybe even mimic human intelligence. And, of course, there are a lot of funny twists in all of this that we'll get into. But I think one of the earliest turning points was the idea that thinking itself could basically be reduced to math and logic. So if human reasoning followed rules, then the theory was that you could encode those rules into a machine. And that was basically the foundational belief of early AI. And so in 1956 this officially got a name. There was a group of researchers that gathered for a workshop at Dartmouth, and they coined the term artificial intelligence. And that's basically the moment that most people consider to be the birth of the field of AI that we have today. So this early AI, obviously, you know, what they thought it could do was extremely optimistic. I think it was wildly optimistic. Basically, these researchers believed that human-level intelligence was maybe 20 years away. They thought things like vision, language, and reasoning were basically solved problems. And of course, I think the spoiler alert is that they were not, because we're here, you know, over 50 years later, still figuring a lot of this stuff out. I mean, 75 years later for some of this stuff. So a lot of these early AI systems were what we now call symbolic AI. Basically, the systems worked by hand-coded rules. So if this happens, then do that, right? It's kind of an if-then: if the computer sees a pattern, it's going to respond in a specific way. And in a really narrow domain, this actually worked.
You could build programs, you know, that played chess or that solved logic puzzles or did basic math proofs. But the second that you took them outside of these really small, controlled environments, everything broke, right? This is not actual intelligence. I mean, we know what these are; it's just kind of rule-based computer systems. But at the time they believed they truly had achieved, you know, artificial intelligence. So of course we know that the real world is very messy, language is ambiguous, vision is very noisy. As humans we rely on intuition and experience and also on context. So there's a lot of things that aren't just rules; it's not just math that you can have a computer solve. And so no matter how many rules you write, you can never actually capture everything that happens in reality. And so this led to one of the first big AI disappointments, you could say. And because of this, a lot of funding dried up, and a lot of expectations, people just, you know, basically watched them collapse. And this became known among a lot of researchers as the AI winter. So governments, universities, basically all just said, yeah, well, this isn't really AI. It's not really working. We'll continue developing computers, but we're not really focusing on that specific direction. So that kind of froze for a while. But then in the 1980s, AI came back in a bit of a new form, and that was these expert systems. So they were essentially programs designed to replicate the decision making of human experts. So doctors, engineers, chemists, people with very specialized knowledge. And companies poured a ton of money into those systems because, again, you know, in really narrow domains, they actually worked quite well. So you could encode expert knowledge, you could get really interesting outputs. The problem was that they were very brittle systems. They're also incredibly expensive to build.
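To make the if-then style of symbolic AI and expert systems a little more concrete, here is a toy sketch. It's a hypothetical example invented for illustration, not any real historical system: a small rule base that maps observed facts to a conclusion. Inside its narrow domain it works; on anything it has no rule for, it simply fails, which is the brittleness described above.

```python
# Toy symbolic "expert system": hand-coded if-then rules.
# Purely illustrative -- the rules and domain are made up.

RULES = [
    # (set of required facts, conclusion)
    ({"fever", "cough"}, "flu"),
    ({"sneezing", "itchy eyes"}, "allergies"),
    ({"fever", "stiff neck"}, "see a doctor immediately"),
]

def diagnose(facts):
    """Fire the first rule whose required facts are all present."""
    for conditions, conclusion in RULES:
        if conditions <= facts:      # are all required facts observed?
            return conclusion
    return "unknown"                 # brittle: no hand-written rule matches

print(diagnose({"fever", "cough"}))  # inside the narrow domain: works
print(diagnose({"fatigue"}))         # outside it: the system has no answer
```

The failure mode is visible immediately: any input the rule authors didn't anticipate falls straight through to "unknown", and the only fix is writing more rules by hand.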
And I don't think enough people talk about that. They're really expensive to maintain. And then, of course, they don't scale, right? Every time the world changed, you had to update the rules manually. And once that happens and things break, then of course the hype is ahead of the reality. And so everyone gets disappointed, and then you get another AI winter, right? Because these tools worked for, like, a moment, and as things changed in the world, they stopped working. So this is where I think it gets a little bit interesting for AI. The whole field took an interesting turn. So symbolic AI was definitely struggling, and what the field needed was a totally different approach: machine learning. So instead of telling a computer exactly what to do, you let it learn from data. And the idea was inspired by the human brain: neurons, the connections between them, and then kind of learning from experience. So early versions of neural networks existed, like, all the way back in the 1950s, but they were super, super limited. Computers were very slow, there wasn't a lot of data, and the math was very hard. So for many decades, these neural networks were basically ignored. But that all stopped after three main things happened. So first, of course, data exploded. You have the Internet, you have smartphones, you have social media. So much data is being created, and suddenly we have all of this data, specifically about languages and images and behavior and, like, everything. And then second, compute got super, super cheap and also powerful. So the GPUs that were, you know, originally built for gaming, they turned out to be really perfect for training neural networks. And I mean, I would even go so far as to say a lot of the hardware that was built for crypto mining.
And then when the crypto winter came, that just kind of perfectly pivoted into AI, and we had all of this infrastructure built out. Had we not been through that, we wouldn't have been able to scale up training AI models as fast as we did. So that all helped. And I think the last thing that really helped was that researchers figured out some better techniques for training deep neural networks. And this is kind of where deep learning comes in. It's basically the idea that you stack a whole bunch of layers of neural networks to learn harder and more complex patterns. And basically by adding all of that together, the data, the compute, and that new strategy, everything changed. So in the early 2010s, deep learning started to crush a lot of benchmarks. It's also hilarious to talk about crushing benchmarks in 2010 because it's definitely different than what we have today. But you had image recognition that all of a sudden actually worked. You had speech recognition that got really good. Translation went from being super terrible to usable. I mean, I even remember the early days of Google Translate, you know, everyone made fun of it. As time went on, it became really, really good. So because of this, a lot of companies realized, look, this is actually scaling. And so instead of just, you know, writing rules, you just give models these massive data sets and you let them learn. And so basically, the more data you gave them, the better they got. The more compute you gave them, the smarter they became. And so I think when we realized that, it kicked off basically what's known as the modern AI boom. From there, everything got, you know, accelerated much faster. Models got way bigger. We realized we needed to have much bigger models. The data sets, we realized, had to get much larger.
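As a rough sketch of what "stacking layers" means, here's a minimal forward pass through a tiny network in plain Python. The layer sizes and random weights are arbitrary illustrative assumptions, and there's no training here at all: the point is just that each layer is a weighted sum plus a nonlinearity, and "deep" learning means chaining several of these so later layers can build on the patterns earlier ones extract.

```python
import random

random.seed(0)

def layer(xs, weights, biases):
    """One layer: weighted sum per neuron, plus bias, then ReLU nonlinearity."""
    return [max(0.0, sum(x * w for x, w in zip(xs, ws)) + b)
            for ws, b in zip(weights, biases)]

def random_layer(n_in, n_out):
    """Arbitrary untrained weights, purely for illustration."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

# Three stacked layers: 3 inputs -> 4 units -> 4 units -> 2 outputs.
w1 = random_layer(3, 4)
w2 = random_layer(4, 4)
w3 = random_layer(4, 2)

x = [0.5, -0.2, 0.1]        # one input example with 3 features
h1 = layer(x, *w1)          # layer 1 output
h2 = layer(h1, *w2)         # layer 2 builds on layer 1's features
out = layer(h2, *w3)        # final layer: 2 outputs
print(len(out))             # 2
```

In a real system, training adjusts all those weights from data instead of leaving them random, and the layers number in the dozens to hundreds with billions of weights, but the stacking idea is the same.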
And then training runs went from, you know, hours to weeks, and then started getting pushed into months. And eventually we've arrived at these large language models, and we have the kind of AI that can read, write, reason and talk. I think what's important to understand with all of this is that obviously modern AI isn't magic. These models don't think like humans. They don't have consciousness, right? They don't have beliefs or desires, despite what everyone's going to tell you over on X about some bot or whatever making its own social media network and overthrowing the humans and all that kind of stuff. Really what they have is a kind of statistical understanding of patterns in data, and of course, just this absolutely massive scale. I think one thing that's important to remember is that intelligence itself is, you know, maybe at its core a pattern recognition and prediction thing. And once you scale that far enough, you start getting behavior that looks a lot like reasoning, but it's still just pattern recognition and prediction. I think that's why the last few years feel a lot different. It doesn't feel like we have this kind of hype cycle based on a bunch of, oh my gosh, we're so close to AI being able to do X, Y and Z. Like, we're seeing these systems actually work. We're seeing them generate actual, real economic value. They're actually transforming how, you know, I work. They're transforming how people code, how people write, research, design, and build businesses. Like, these AIs are actually helping us. And so I think we've gotten past a lot of the earlier hype. Now, of course, there's still plenty of hype today, and people are overhyping many of the capabilities. But, I mean, you just have to look at how fast we've already progressed.
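A crude way to see "statistical patterns in data" rather than thinking: a toy bigram model that predicts the next word purely from counts of which word followed which in its training text. This is an invented sketch, orders of magnitude simpler than a real language model, but the underlying principle (predict the next token from patterns observed in data, no beliefs or understanding involved) is the same.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" -- purely illustrative.
text = "the cat sat on the mat the cat ate the fish".split()

# Count bigram statistics: which word follows which, and how often.
following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Predict the most frequent continuation seen in the training data."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- the most common word after "the" here
```

Modern models condition on far more than one previous word and use learned weights instead of raw counts, but the output is still a prediction driven by patterns in the training data.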
I think from my perspective, this is just the beginning of what these are going to be able to do, obviously, because we're seeing that as you scale compute and as you scale data, they get smarter. So I don't think we've hit a wall on where we go with those. I think we're still super, super early. Models are getting cheaper, faster, more capable. You know, you can think of it this way: you have OpenAI, who spends billions of dollars to train models today; at some point in the near future, those same models are going to be trained at a fraction of the cost, and anyone will be able to, you know, theoretically train those types of models. And I think that's kind of the future we're moving towards. I think the tools are becoming a lot more accessible. You don't need a PhD or this massive budget anymore. Solo founders can totally build products that used to require entire teams. And so that's why I'm super optimistic about AI. And I think every kind of technological shift in history, whether that's electricity or the Internet or smartphones, all of them followed the same pattern, which was kind of this early hype. Then you had a big moment of disappointment. Progress was pretty slow. And then all of a sudden, everything kind of clicks. I think that's where we are now with AI. The history of AI, obviously, to me, when you look at the technology, feels like a real lesson in patience. It took many decades of all of these different ideas failing before we were able to be successful. You had hardware that was underpowered, and you had a lot of unrealistic expectations that we had to get past. But the payoff right now is massive. Like, we're seeing this really, really help a lot of people in how they do work. And so I think we're getting to a world where intelligence is becoming more of a commodity, where intelligence is going to get a lot cheaper and more abundant. This AI that we use, it's going to get a lot cheaper.
The upside is definitely for builders, for these early adopters, people that are trying to work and build and create things. And so I think when people say that, you know, oh my gosh, AI came out of nowhere, I don't think that's true. I think it's definitely been a very long road since the 40s and 50s, but now that all this AI is here, I don't think it's going away. So in my opinion, we're heading into one of the most exciting periods of innovation that we've ever seen. And so I'm super excited to kind of go on this journey. But thanks for tuning into the podcast. This was a ton of fun for me to research and look back on where we've come from and where we're going to be going in the future with AI. If you enjoyed the episode, make sure to leave a rating or review wherever you get your podcasts. And as always, make sure to go check out AI Box AI, my own startup, where I let you access all of the top 50 AI models in one place for 20 bucks a month. And we have a ton of cool new features that we add all the time, including a no-code AI app builder where you can describe an app you want to make and it creates it for you, linking together different AI models. You can go check all that out linked in the description at AI Box AI. I'll catch you in the next episode.
