Steve Gibson (96:44)
Yeah, I do have some feelings, so. Okay. I should note that I already have everything I need, with thanks to today's ChatGPT-4, and it has changed my life for the better. I've been using it increasingly as a time saver, sort of in the form of a programming-language super search engine, and even a syntax checker. I've used it as a crutch when I need to quickly write some throwaway code in a language like PHP, where I do not have expertise, but I want to get something done quickly. I just want to solve a quick problem, you know, parse a text file in a certain way into a different format, that sort of thing. In the past, if it was a somewhat bigger project than that, I would take an hour or two putting queries into Google, following links to Programmers Corner or Stack Overflow or other similar sites, and I would piece together the language construction that I needed from other similar bits of code that I would find online. Or, if I was unable to find anything useful to solve the problem, I would then dig deeper into the language's actual reference texts to find the usage and the syntax that I needed, and build up from that. Because, you know, after you've programmed in a bunch of languages, they're all largely the same. I mean, Lisp is a different animal entirely, as is APL. But for the procedural languages, it's just a matter of, okay, what do I use for inequality? How exactly are the looping constructs built? That kind of thing. That's no longer what I do, because I now have access to what I consider a super programming-language search engine. Now I ask the experimental coding version of ChatGPT for whatever it is I need. I don't ask it to provide the complete program, since that's really not what I want. You know, I love coding in any language because I love puzzles, and puzzles are language agnostic, but I do not equally know the details of every language. There's nothing ChatGPT can tell me about programming assembly language that I have not already known for decades. But say I want to write a quick throwaway utility program in Visual Basic .NET, a language that I've spent very little time with because I like to write in assembly language, and I need to, for example, quickly implement an associative array, as I did last week. Rather than poking around the Internet or scanning through the Visual Basic syntax to find what I'm looking for, I'll now just pose the question to ChatGPT. I'll ask it very specifically and carefully for what I want, and in about two seconds I'll get what I may have previously spent 30 to 60 minutes sussing out online. It has transformed the way I work on that class of problem. It's useful whenever I need details in an area where I do not have expertise, is, I think, the way I would put it. And I've seen plenty of criticism leveled by other programmers at the code produced by today's AI. To me, their criticism seems misplaced, and maybe just a bit nervous. And maybe they're also asking the wrong question. I don't ask ChatGPT for a finished product, because I know exactly what I want, and I'm not even sure I could specify the finished product in words, or that that's what it's really good for. So I ask it just for specific bits and pieces, and I have to report that the results have been fantastic. 
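To make that concrete, here is a minimal sketch, in Python rather than Visual Basic .NET, of the kind of throwaway parse-and-reformat utility being described, built around an associative array. The log format and file names are invented purely for illustration; in VB.NET the dictionary would be a Dictionary(Of String, Integer).

```python
# A throwaway utility of the kind described above: tally how often each user
# appears in a hypothetical colon-delimited log file and write the counts out
# as CSV. The file names and the field layout are made up for illustration;
# the point is the associative array (a plain Python dict).

import csv

counts = {}  # the associative array: key = user name, value = occurrence count

with open("input.log", encoding="utf-8") as infile:
    for line in infile:
        fields = line.rstrip("\n").split(":")
        if len(fields) < 2:
            continue  # skip malformed lines
        user = fields[1].strip()
        counts[user] = counts.get(user, 0) + 1

with open("output.csv", "w", newline="", encoding="utf-8") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["user", "count"])
    for user, count in sorted(counts.items()):
        writer.writerow([user, count])
```

This is exactly the class of problem where asking an AI for the language-specific idiom, rather than the whole program, pays off: the logic is obvious; only the local syntax varies from language to language.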
I mean, it is literally the way I will now code in languages I don't know, is probably the best way to put it. It's ingested the Internet, and, you know, obviously we have to use the term "know" very advisedly; it doesn't actually know them. But whatever it is, I'm able to ask it a question and actually get really good answers to tight problem-domain questions. Okay, but what I want to explore today is what lies beyond what we have today, what the challenges are, and what predictions are being made about how and when we may get more, whatever that more is. You know, the "there" where we want to get is generically known as artificial general intelligence, which is abbreviated AGI. Okay, so let's start by looking at how Wikipedia defines this goal. Wikipedia says: Artificial general intelligence is a type of artificial intelligence that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI. They say creating AGI is a primary goal of AI research and of companies such as OpenAI and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress toward AGI, suggesting it could be achieved sooner than many expect. There's debate on the exact definition of AGI and regarding whether modern large language models such as GPT-4 are early forms of AGI. Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk. AGI is also known as strong AI, full AI, human-level AI, or general intelligent action. However, some academic sources reserve the term strong AI for computer programs that experience sentience or consciousness. In contrast, weak AI, or narrow AI, is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use weak AI to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense humans do. Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, thus transforming it, for example in ways similar to the agricultural or industrial revolutions. A framework for classifying AGI levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AGI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI, in other words an artificial superintelligence, is similarly defined but with a threshold of 100%. 
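As a compact restatement of that framework, here is a small sketch. Only the 50% and 100% thresholds are quoted above; the intermediate percentiles shown for expert and virtuoso are assumptions drawn from a recollection of the DeepMind paper, so treat them as illustrative rather than authoritative.

```python
# The five performance levels of AGI proposed by Google DeepMind researchers
# (2023), expressed as the percentile of skilled adults an AI must outperform
# on a wide range of non-physical tasks. Only the 50 and 100 thresholds are
# quoted in the text above; the 90 and 99 values are assumptions for
# illustration, not authoritative figures.

LEVELS = [
    ("emerging",   0),    # equal to or somewhat better than an unskilled human
    ("competent",  50),   # outperforms 50% of skilled adults
    ("expert",     90),   # assumed threshold for illustration
    ("virtuoso",   99),   # assumed threshold for illustration
    ("superhuman", 100),  # outperforms 100% of humans: artificial superintelligence
]

def classify(percentile_outperformed: float) -> str:
    """Map a measured skill percentile to the highest level achieved."""
    achieved = "emerging"
    for name, threshold in LEVELS:
        if percentile_outperformed >= threshold:
            achieved = name
    return achieved

print(classify(55))   # -> "competent"
print(classify(100))  # -> "superhuman"
```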
They consider large language models like ChatGPT or Llama 2 to be instances of the first level, emerging AGI. Okay, so we're getting some useful language and terminology for talking about these things. The article that caught my eye last week, as we were celebrating the thousandth episode of this podcast, was posted on Perplexity.ai, titled "Altman predicts AGI by 2025." The Perplexity piece turned out not to have much meat, but it did offer the kernel of some interesting thoughts and some additional terminology and talking points, so I still want to share it. Perplexity wrote: OpenAI CEO Sam Altman has stirred the tech community with his prediction that artificial general intelligence (AGI) could be realized by 2025, a timeline that contrasts sharply with many experts who foresee AGI's arrival much later. Despite skepticism, Altman asserts that OpenAI is on track to achieve this ambitious goal, emphasizing ongoing achievements and substantial funding, while also suggesting that the initial societal impact of AGI might be minimal. In a Y Combinator interview, Altman expressed excitement about the potential developments in AGI for the coming year. However, he also made a surprising claim that the advent of AGI would have surprisingly little impact on society, at least initially. This statement has sparked debate among AI experts and enthusiasts, given the potentially transformative nature of AGI. And Altman's optimistic timeline stands in stark contrast to many other experts in the field, who typically project AGI development to occur much later, around 2050. Despite the skepticism, Altman maintains that OpenAI is actively pursuing this ambitious goal, even suggesting that it might be possible to achieve AGI with current hardware. This confidence, coupled with OpenAI's recent $6.6 billion funding round and its market valuation exceeding $157 billion, underscores the company's commitment to pushing the boundaries of AI technology. Achieving artificial general intelligence faces several significant technical challenges that extend beyond current AI capabilities. So here we have four bullet points that outline what AGI needs that there's no sign of today. First, common-sense reasoning: AGI systems must develop an intuitive understanding of the world, including implicit knowledge and unspoken rules, to navigate complex social situations and make everyday judgments. Second, context awareness: AGI needs to dynamically adjust behavior and interpretations based on situational factors, environment, and prior experiences. Third, handling uncertainty: AGI must interpret incomplete or ambiguous data, draw inferences from limited information, and make sound decisions in the face of the unknown. And fourth, continual learning: developing AGI systems that can update their knowledge and capabilities over time, without losing previously acquired skills, remains a significant challenge. So one thing that occurs to me as I read those four points, reasoning, contextual awareness, uncertainty, and learning, is that none of the AIs I've ever interacted with has ever asked for any clarification about what I'm asking. That's not something that appears to be wired into the current generation of AI. I'm sure it could be simulated if it would further raise the stock price of the company doing it, but it wouldn't really matter, right? Because it would be a faked question, like that very old ELIZA pseudo-therapist program from the '70s. 
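For anyone who never met ELIZA, a minimal sketch of its trick, pattern matching plus pronoun reflection plus a canned template, shows just how hollow it was. The patterns below are invented for illustration and are not ELIZA's actual script, but the general shape is the same.

```python
# A minimal ELIZA-style responder: match a pattern, reflect the pronouns,
# and wrap the user's own words in a canned question. No understanding of
# any kind is involved. These two rules are invented for illustration; the
# real ELIZA used a larger hand-written script of the same general shape.

import re

REFLECTIONS = {"i": "you", "i'm": "you're", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

RULES = [
    (re.compile(r"i'?m feeling (.*)", re.I),
     "Why do you think you're feeling {0}?"),
    (re.compile(r"i (?:want|need) (.*)", re.I),
     "What would it mean to you if you got {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # the all-purpose fallback

print(respond("I'm feeling sort of cranky today"))
# -> "Why do you think you're feeling sort of cranky today?"
```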
You know, you would type into it, "I'm feeling sort of cranky today," and it would reply, "Why do you think you're feeling sort of cranky today?" It wasn't really asking a question. It was just programmed to seem like it was understanding what we were typing in. The point I hope to make is that there's a hollowness to today's AI. It's truly an amazing search engine technology, but it doesn't seem to be much more than that to me. There's no presence or understanding behind its answers. The Perplexity article continues, saying: Overcoming these hurdles requires advancements in areas such as neural network architectures, reinforcement learning, and transfer learning. Additionally, AGI development demands substantial computational resources and interdisciplinary collaboration among experts in computer science, neuroscience, and cognitive psychology. While some AI leaders like Sam Altman predict AGI by 2025, many experts remain skeptical of such an accelerated timeline. A 2022 survey of 352 AI experts found that the median estimate for AGI development was around 2060, also known as Security Now episode 2860. Ninety percent of the 352 experts surveyed expect to see AGI within 100 years, so 90% expect it not to take longer than 100 years, but the median is 2060. So not next year, as Sam suggests. They wrote, this more conservative outlook stems from several key challenges. First, the missing-ingredient problem: some researchers argue that current AI systems, while impressive, lack fundamental components necessary for general intelligence; statistical learning alone may not be sufficient to achieve AGI. Again, the missing-ingredient problem; I think that sounds exactly right. Second, training limitations: creating virtual environments complex enough to train an AGI system to navigate the real world, including human deception, presents significant hurdles. And third, scaling challenges: despite advancements in large language models, some reports suggest diminishing returns in improvement rates between generations. These factors contribute to a more cautious view among many AI researchers, who believe AGI development will likely take decades rather than years to achieve. OpenAI has recently achieved significant milestones in both technological advancement and financial growth. The company successfully closed, and here they're saying it again, a massive $6.6 billion funding round, valuing it at $157 billion. But, you know, who cares; Sam is a good salesman. They said this round attracted investments from major players like Microsoft, Nvidia, and SoftBank, highlighting the tech industry's confidence in OpenAI's potential. The company's flagship product, ChatGPT, has seen exponential growth, now boasting over 250 million weekly active users, and you can count me among them. OpenAI has also made substantial inroads into the corporate sector, with 92% of Fortune 500 companies reportedly using its technologies. Despite these successes, OpenAI faces challenges, including high operational costs and the need for extensive computing power. The company is projected to incur losses of about $5 billion this year, primarily due to the expenses associated with training and operating its large language models. 
So when I was thinking about this idea of, we're just going to throw all this money at it and it's going to solve the problem, and oh look, the solution is going to be next year, the analogy that hit me was curing cancer, because there's sort of an example of, oh look, we just had a breakthrough and this is going to cure cancer. It's like, no, we don't really understand enough yet about human biology to say that we're going to do that. And I know that the current administration has been promoting these cancer moonshots. And it's like, okay, have you actually talked to any biologists about this? Or do you just think that you can pour money on it and it's going to do the job? That's not always the case. So to me, this notion of the missing ingredient is the most salient of all of this. What we have today has become very good at doing what it does, but it may not be extendable. It may never be what we need for AGI. But I think that what I've shared so far gives a bit of calibration about where we are and what the goals of AGI are. I also found a piece in InformationWeek where the author did a bunch of interviewing and quoting of people that I want to share, just to finish this topic off. It was titled "Artificial General Intelligence in 2025: Good Luck With That." And it had the teaser: AI experts have said it would likely be 2050 before AGI hits the market. OpenAI CEO Sam Altman says 2025. But it's a very difficult problem to solve. So they wrote: A few years ago, AI experts were predicting that artificial general intelligence would become a reality by 2050. OpenAI has been pushing the art of the possible, along with big tech. But despite Sam Altman's estimate of 2025, realizing AGI is unlikely soon. HP Newquist, author of The Brain Makers and executive director of The Relayer Group, a consulting firm that tracks the development of practical AI, said, quote: We can't presume that we're close to AGI because we really don't understand current AI, which is a far cry from the dreamed-of AGI. We don't know how current AIs arrive at their conclusions, nor can current AIs even explain to us the processes by which that happens. That's a huge gap that needs to be closed before we can start creating an AI that can do what every human can do. And a hallmark of human thinking, which AGI will attempt to replicate, is being able to explain the rationale for coming up with a solution to a problem or an answer to a question. We're still trying to keep existing large language models from hallucinating, unquote. And I'll just interrupt to say that I think this is the crucial point. Earlier I described ChatGPT as being a really amazingly powerful Internet search engine. Partly that's because that's what I've been using it to replicate for my own needs. As I said, it's been a miraculous replacement for a bunch of searching I would otherwise need to do myself. My point is, this entire current large language model approach may never be more than that. This could be a dead end. If so, it's a super-useful dead end, but it might not be the road to AGI at all. It might never amount to being more than a super-spiffy search engine. The InformationWeek article continues: OpenAI is currently alpha testing Advanced Voice Mode, which is designed to sound human, such as pausing occasionally, as one does when speaking, to draw a breath. It can also detect emotion and nonverbal cues. 
This advancement will help AI seem more human-like, which is important, but there's more work to do. And frankly, that's where we begin to get into the category of parlor tricks, in my opinion, like making it seem like more than it is, when it still isn't. Edward Tian, CEO of GPTZero, which detects the use of generative AI in text, also believes the realization of AGI will take time. In an email interview with the article's author, Tian said, quote: The idea behind artificial general intelligence is creating the most human-like AI possible, a type of AI that can teach itself and essentially operate in an autonomous manner. So one of the most obvious challenges is creating AI in a way that allows the developers to be able to take their hands off eventually, as the goal is for it to operate on its own. Technology, no matter how advanced, cannot be human. So the challenge is trying to develop it to be as human as possible. That also leads to ethical dilemmas regarding oversight. There are certainly a lot of people out there who are concerned about AI having too much autonomy and control, and those concerns are valid. How do the developers make AGI while also being able to limit its abilities when necessary? Because of all these questions, and our limited capabilities and regulations at present, I do not believe that 2025 is realistic. Current AI, which is artificial narrow intelligence, performs a specific task well, but it cannot generalize that knowledge to suit a different use case. Max Li, the CEO of the decentralized AI data provider Oort and an adjunct associate professor in the Department of Electrical Engineering at Columbia University, said, quote: Given how long it took to build current AI models, which suffer from inconsistent outputs, flawed data sources, and unexplainable biases, it would likely make sense to perfect what already exists rather than start working on even more complex models. In academia, for many components of AGI, we do not even know why it works, nor why it does not work. To achieve AGI, a system needs to do more than just produce outputs and engage in conversation, which means that LLMs alone won't be enough. Alex Jaimes, chief AI officer at the AI company Dataminr, said in an email interview, quote: It should also be able to continuously learn, forget, make judgments that consider others, including the environment in which the judgments are made, and a lot more. From that perspective, we're still very far. It's hard to imagine AGI that doesn't include social intelligence, and current AI systems don't have any social capabilities, such as understanding how their behavior impacts others, cultural and social norms, et cetera. Sergei Kasovich, the deputy CTO at the gambling software company SoftSwiss, said: To get to AGI, we need advanced learning algorithms that can generalize and learn autonomously, integrated systems that combine various AI disciplines, massive computational power, diverse data, and a lot of interdisciplinary collaboration. For example, current AI models like those used in autonomous vehicles require enormous data sets and computational power just to handle driving in specific conditions, let alone achieve general intelligence. LLMs are based on complex transformer models. While they are incredibly powerful and even have some emergent intelligence, the transformer is pre-trained and does not learn in real time. For AGI, there will need to be some breakthroughs with AI models. 
They will need to be able to generalize about situations without having to be trained on a particular scenario. A system will also need to do this in real time, just like a human can when they intuitively understand something. In addition, AGI capabilities may need a new hardware architecture, such as quantum computing, since GPUs will probably not be sufficient. Note that Sam Altman has specifically disputed this and said that current hardware will be sufficient. In addition, the hardware architecture will need to be much more energy efficient and not require massive data centers. LLMs are beginning to do causal inference and will eventually be able to reason. They'll also have better problem-solving and cognitive capabilities based on the ability to ingest data from multiple sources. So, okay, what's interesting is the degree of agreement that we see among separate experts. You know, they're probably all reading the same material, so there's some degree of convergence in their thinking. But Altman is an outlier. And it seems to me, from the things they've said, as though these people know what they're talking about. Perhaps Sam has already seen things in the lab at OpenAI that no one else in the outside world has seen, because that's what it would take for Sam not to be guilty of overhyping and overpromoting his company's near-term future. Now, I put a picture in the show notes. You had it on the screen there a second ago, Leo. That is not a mockup. That is not a simulation. It is an actual image of a tiny piece of cerebral tissue. Those are neurons and axons and dendrites. The coloration was added, but that is actual human brain tissue in that photo in the show notes. I'm especially intrigued by the comments from the top academic AI researchers in the world, who admit that to this day no one actually understands how large language models produce what they do. Given that, I'm skeptical that just more of the same will result in the sort of qualitative advancement that AGI would require, which is certainly not just more of the same. When I said in the past that I see no reason why a true artificial intellect could not eventually be created, I certainly did not mean next year. I meant someday. I meant that I believe a biological brain may be only one way to create intelligence. One thing I've acquired during my research into the biology of the human brain is a deep appreciation for the astonishing complexity, I mean astonishing, of the biological computing engine that is us. The number of individual computing neurons in the human brain is 10 to the 11th. That's 100 billion individual neurons, a billion neurons 100 times over. And not only are these individual neurons very richly interconnected, typically having connections to 20,000 others; each individual neuron is, all by itself, astonishingly complex in its behavior and operation. They are far from being the simple, integrative binary triggers we learned about in elementary school. And we have 100 billion of these little buggers in our heads. So perhaps Sam is going to surprise the rest of the world next year. We'll see. Color me skeptical, but not disappointed. As I said, I'm quite happy to have discovered the wonderful language-accessible Internet digest that ChatGPT is. You know, that's more than a simple parlor trick. 
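And about that brain-scale point from a moment ago: a quick back-of-the-envelope calculation from the figures just cited gives a sense of the gap. The GPT-3 parameter count used for comparison is the publicly reported 175 billion; GPT-4's size is undisclosed, so it is left out.

```python
# Back-of-the-envelope arithmetic from the figures cited above:
# ~10^11 neurons, each connected to ~20,000 others. Each synapse joins
# two neurons, so dividing by 2 avoids double-counting connections.
# For comparison, GPT-3's publicly reported parameter count is 175 billion;
# GPT-4's size has not been disclosed, so it is omitted here.

neurons = 10**11                  # ~100 billion neurons
connections_per_neuron = 20_000   # typical connections per neuron, as cited

synapses = neurons * connections_per_neuron // 2
print(f"Estimated synapses: {synapses:.3e}")  # ~1e15, on the order of a quadrillion

gpt3_parameters = 175 * 10**9
print(f"Synapses per GPT-3 parameter: {synapses / gpt3_parameters:.0f}")
# Roughly 5,700 synapses for every GPT-3 parameter, before even considering
# that each neuron is itself a complex little machine rather than a simple weight.
```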
That digest is a big deal, and I think it's kind of magic. But I suspect that all it is is what it is. And for me, that's enough for now. I'd wager that we have a long way to wait before we get more.