Steve Gibson (135:13)
I should note that I already have everything I need, thanks to today's ChatGPT with GPT-4o, and it has changed my life for the better. I've been using it increasingly as a time saver, in the form of a programming-language super search engine and even a syntax checker. I've used it as a crutch when I need to quickly write some throwaway code in a language like PHP, where I don't have expertise but want to get something done quickly. You know, solve a quick problem, like parsing a text file in a certain way into a different format, that sort of thing. In the past, if it was a somewhat bigger project than that, I would take an hour or two putting queries into Google, following links to Programmer's Corner or Stack Overflow or other similar sites, and I would piece together the language construction I needed from similar bits of code I found online. Or, if I was unable to find anything useful, I would dig deeper into the language's actual reference texts to find the usage and syntax I needed and build up from there, because after you've programmed in a bunch of languages, they're all largely the same. I mean, Lisp is a different animal entirely, as is APL, but for the procedural languages it's just a matter of, okay, what do I use for inequality? How exactly are the looping constructs built? That kind of thing. That's no longer what I do, because I now have access to what I consider a super programming-language search engine. Now I ask the experimental coding version of ChatGPT for whatever it is I need. I don't ask it to provide the complete program, since that's really not what I want.
You know, I love coding in any language because I love puzzles, and puzzles are language agnostic, but I don't equally know the details of every language. There's nothing ChatGPT can tell me about programming in assembly language that I haven't already known for decades. But suppose I want to write a quick throwaway utility in Visual Basic .NET, a language I've spent very little time with because I like to write in assembly language, and I need, for example, to quickly implement an associative array, as I did last week. Rather than poking around the Internet or scanning through the Visual Basic syntax to find what I'm looking for, I'll now just pose the question to ChatGPT. I'll ask it very specifically and carefully for what I want, and in about two seconds I'll get what I might previously have spent 30 to 60 minutes sussing out online. It has transformed my workflow for that class of problem I've traditionally had. It's useful whenever I need details in an area where I don't have expertise; I think that's the way I would put it. And I've seen plenty of criticism leveled by other programmers at the code produced by today's AI. To me their criticism seems misplaced, and maybe just a bit nervous, and maybe they're also asking the wrong question. I don't ask ChatGPT for a finished product, because I know exactly what I want, and I'm not even sure I could specify the finished product in words, or that that's what it's really good for. So I ask it just for specific bits and pieces, and I have to report that the results have been fantastic. It is literally the way I will now code in languages I don't know; I think that's probably the best way to put it. It has ingested the Internet's programming languages, and obviously we have to use the term "knowing" them very advisedly.
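[For readers unfamiliar with the term: an associative array maps arbitrary keys to values. A minimal sketch of the kind of throwaway task described above, written here in Python for brevity; the episode's actual example was in Visual Basic .NET, where a Dictionary type plays the same role.]

```python
# Count occurrences of each word using an associative array (a dict in
# Python; a Dictionary(Of String, Integer) would serve in VB .NET).
lines = ["alpha beta", "beta gamma", "beta"]

counts = {}
for line in lines:
    for word in line.split():
        # .get(word, 0) returns the current count, or 0 if unseen.
        counts[word] = counts.get(word, 0) + 1

print(counts)  # {'alpha': 1, 'beta': 3, 'gamma': 1}
```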
It doesn't know them, but whatever it is, I'm able to ask it a question and actually get really good answers to tight problem-domain questions. Okay. What I want to explore today is what lies beyond what we have today, what the challenges are, and what predictions are being made about how and when we may get more, whatever that "more" is. The place where we want to get is generically known as artificial general intelligence, abbreviated AGI. So let's start by looking at how Wikipedia defines this goal. Wikipedia says: Artificial general intelligence is a type of artificial intelligence that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI. They say creating AGI is a primary goal of AI research and of companies such as OpenAI and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress toward AGI, suggesting it could be achieved sooner than many expect. There's debate on the exact definition of AGI and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk.
AGI is also known as strong AI, full AI, human-level AI, or general intelligent action. However, some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. In contrast, weak AI, or narrow AI, is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense humans do. Related concepts include artificial superintelligence and transformative AI. Artificial superintelligence is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, thus transforming it, similar, for example, to the agricultural or industrial revolutions. A framework for classifying AGI levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as one that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI, in other words an artificial superintelligence, is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or Llama 2 to be instances of the first level, emerging AGI. Okay, so we're getting some useful language and terminology for talking about these things. The article that caught my eye last week, as we were celebrating the thousandth episode of this podcast, was posted on Perplexity AI, titled "Altman predicts AGI by 2025." The Perplexity piece turned out not to have much meat, but it did offer the kernel of some interesting thoughts and some additional terminology and talking points, so I still want to share it.
Perplexity wrote: OpenAI CEO Sam Altman has stirred the tech community with his prediction that artificial general intelligence (AGI) could be realized by 2025, a timeline that contrasts sharply with many experts who foresee AGI's arrival much later. Despite skepticism, Altman asserts that OpenAI is on track to achieve this ambitious goal, emphasizing ongoing achievements and substantial funding, while also suggesting that the initial societal impact of AGI might be minimal. In a Y Combinator interview, Altman expressed excitement about the potential developments in AGI for the coming year. However, he also made the surprising claim that the advent of AGI would have surprisingly little impact on society, at least initially. This statement has sparked debate among AI experts and enthusiasts, given the potentially transformative nature of AGI. Altman's optimistic timeline stands in stark contrast to many other experts in the field, who typically project AGI development to occur much later, around 2050. Despite the skepticism, Altman maintains that OpenAI is actively pursuing this ambitious goal, even suggesting that it might be possible to achieve AGI with current hardware. This confidence, coupled with OpenAI's recent $6.6 billion funding round and its market valuation exceeding $157 billion, underscores the company's commitment to pushing the boundaries of AI technology. Achieving artificial general intelligence faces several significant technical challenges that extend beyond current AI capabilities. So here we have four bullet points that outline what AGI needs that there's no sign of today. First, common-sense reasoning: AGI systems must develop an intuitive understanding of the world, including implicit knowledge and unspoken rules, to navigate complex social situations and make everyday judgments. Second, context awareness: AGI needs to dynamically adjust behavior and interpretations based on situational factors, environment, and prior experiences.
Third, handling uncertainty: AGI must interpret incomplete or ambiguous data, draw inferences from limited information, and make sound decisions in the face of the unknown. And fourth, continual learning: developing AGI systems that can update their knowledge and capabilities over time without losing previously acquired skills remains a significant challenge. So one thing that occurs to me as I read those four points, reasoning, contextual awareness, uncertainty, and learning, is that none of the AIs I've ever interacted with has ever asked for any clarification about what I'm asking. That's not something that appears to be wired into the current generation of AI. I'm sure it could be simulated, you know, if it would further raise the stock price of the company doing it. But it wouldn't really matter, right? Because it would be a faked question, like that very old Eliza pseudo-therapist program from the '70s. You would type into it, "I'm feeling sort of cranky today," and it would reply, "Why do you think you're feeling sort of cranky today?" It wasn't really asking a question; it was just programmed to seem like it was understanding what we were typing in. The point I hope to make is that there's a hollowness to today's AI. It's truly an amazing search engine technology, but it doesn't seem to be much more than that to me. There's no presence or understanding behind its answers. The Perplexity article continues, saying: Overcoming these hurdles requires advancements in areas such as neural network architectures, reinforcement learning, and transfer learning. Additionally, AGI development demands substantial computational resources and interdisciplinary collaboration among experts in computer science, neuroscience, and cognitive psychology. While some AI leaders like Sam Altman predict AGI by 2025, many experts remain skeptical of such an accelerated timeline.
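[The Eliza behavior described above is easy to reproduce, which is exactly the point: the "question" is hollow. This hypothetical sketch in Python just reflects first-person words to second person and wraps the input in a fixed template; no comprehension happens anywhere.]

```python
# Eliza-style canned reply: reflect pronouns, then wrap the user's own
# statement in a question template. Pure text substitution, no understanding.
REFLECTIONS = {"i'm": "you're", "i": "you", "my": "your", "am": "are"}

def eliza_reply(statement: str) -> str:
    words = statement.rstrip(".!?").split()
    reflected = [REFLECTIONS.get(w.lower(), w.lower()) for w in words]
    return "Why do you think " + " ".join(reflected) + "?"

print(eliza_reply("I'm feeling sort of cranky today"))
# Why do you think you're feeling sort of cranky today?
```

The one-line substitution table is what made Eliza feel eerily attentive in the 1970s, and it is why a simulated "clarifying question" from a modern model would prove nothing by itself.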
A 2022 survey of 352 AI experts found that the median estimate for AGI development was around 2060, also known as Security Now episode 2860. Ninety percent of the 352 experts surveyed expect to see AGI within 100 years. So 90% expect it not to take longer than a century, but the median estimate is 2060; you know, not next year, as Sam suggests. They wrote: This more conservative outlook stems from several key challenges. First, the missing-ingredient problem. Some researchers argue that current AI systems, while impressive, lack fundamental components necessary for general intelligence. Statistical learning alone may not be sufficient to achieve AGI. Again, the missing-ingredient problem; I think that sounds exactly right. Second, training limitations. Creating virtual environments complex enough to train an AGI system to navigate the real world, including human deception, presents significant hurdles. And third, scaling challenges. Despite advancements in large language models, some reports suggest diminishing returns in improvement rates between generations. These factors contribute to a more cautious view among many AI researchers, who believe AGI development will likely take decades rather than years. OpenAI has recently achieved significant milestones in both technological advancement and financial growth. The company successfully closed, and here they're again citing it, a massive $6.6 billion funding round valuing the company at $157 billion. But you know, who cares; Sam is a good salesman. They said this round attracted investments from major players like Microsoft, Nvidia, and SoftBank, highlighting the tech industry's confidence in OpenAI's potential. The company's flagship product, ChatGPT, has seen exponential growth, now boasting over 250 million weekly active users, and you can count me among them. OpenAI has also made substantial inroads into the corporate sector, with 92% of Fortune 500 companies reportedly using its technologies.
Despite these successes, OpenAI faces challenges including high operational costs and the need for extensive computing power. The company is projected to incur losses of about $5 billion this year, primarily due to the expenses associated with training and operating its large language models. So when I was thinking about this idea of "we're just going to throw all this money at it, it's going to solve the problem, and look, the solution is going to be next year," the analogy that hit me was curing cancer, because that's an example of "look, we just had a breakthrough, and this is going to cure cancer." And it's like, no, we don't really understand enough yet about human biology to say that. And I know the current administration has had these cancer moonshots, and it's like, okay, have you actually talked to any biologists about this? Or do you just think you can pour money on it and it will do the job? That's not always the case. So to me, this notion of the missing ingredient is the most salient point of all of this. What we have today has become very good at doing what it does, but it may not be extendable. It may never be what we need for AGI. But I think that what I've shared so far gives a bit of calibration about where we are and what the goals of AGI are.