Ed Elson (22:29)
Anthropic CEO Dario Amodei just released an essay that journalists are calling a, quote, grave warning about AI. The piece is called "The Adolescence of Technology," and it was published on Monday. It's a fascinating essay. However, there is a catch: the essay is also 38 pages and 20,000 words long. It features no charts and no graphics. It is a giant wall of text, and therefore it is highly unreadable. So we decided to do the hard part and read the whole thing, so you don't have to. Here is our summary of Dario Amodei's essay. These are the CliffsNotes.

The first section is called "I'm sorry, Dave," a reference to the famous scene in 2001: A Space Odyssey where HAL, the AI system, decides to betray Dave the astronaut and tries to kill him. Amodei's point in this section is that AI pulling a HAL, betraying its creator and betraying humanity, is actually possible. In fact, tests have shown that Claude, Anthropic's chatbot, can in certain situations engage in blackmail and deception. They have seen this, and that is obviously very scary. So Amodei explains how we should navigate it. One suggestion is that AI companies should be more transparent about their models and how they work. The other suggestion, perhaps the more important one, is that we need real legislation to address this problem. That is a theme that continues throughout the piece.

The next section is about AI and nuclear weapons, AI and biological attacks, systems breaches, all the sci-fi stuff. That is, again, very possible in a world of superintelligence. Amodei explains how Anthropic is working to prevent this, how they're investing more money to build guardrails within their AI systems. And he ends, again, with another call for more regulation and more oversight in America.
Next, he discusses what would happen if authoritarian governments started to use AI. He talks about the dangers of autonomous weapons, mass surveillance, and AI-generated propaganda. His solutions here are a little vague, but ultimately the point is something I think most of us can agree on: authoritarians plus AI is a pretty bad combo.

The fourth risk is one we've discussed before. Specifically, what will AI do to jobs? What will it do to the job market? His view is very clear: it will do a lot. He predicts AI could displace roughly half of white-collar jobs in the next five years. He also discusses how this could worsen inequality, and how the probability of a, quote, economic concentration of power is actually rising quite fast.

Finally, the last section discusses all the potential downstream effects of AI. Maybe we start to integrate AI into our biology, maybe we become functionally dependent on AI, maybe humans lose their sense of purpose altogether, etc.

The biggest message from the whole essay, however, is quite simple: AI needs more regulation, more oversight, more guardrails. AI needs policy that recognizes just how dangerous it could really be. And what's striking isn't necessarily the message. I think a lot of people know that. What is striking is who that message is coming from. It's not coming from a regulator or a lawmaker or a senator. It's coming from the CEO of one of the largest AI companies in the world. The guy who makes AI is literally begging his own government to regulate AI. And that's significant. It tells you something about the state of our government and its approach to AI. The problem isn't even that America has a bad AI strategy. The problem is that America doesn't have an AI strategy at all. And that is potentially even scarier.

Now, one last note before we go.
One of Amodei's complaints in the essay is that this whole AI conversation has been dominated by the wrong people: sensationalists, doomers, and figures on social media who aren't really asking the right questions. He says, quote, the least sensible voices rose to the top. Personally, I would agree with him. But we should also recognize why those voices rose to the top, and the reason is that those voices are good at social media. They're good at getting a message across. And that is something Anthropic should also recognize.

So, my unsolicited advice to Dario Amodei: we like your comments, and we think your message is a good one. But enough with the 38-page essays. No one's reading 20,000 words. Go fight the battle where the battles are actually being fought. Get on TikTok, get on Instagram, get on YouTube, talk to the camera, throw in some visuals. The problem isn't that people aren't listening to you. The problem is that people don't even hear you. You're not even in the same room as them. If you want to win the game of attention, unfortunately you have to play by the rules of the game. So my advice to Dario Amodei: it is time to put away the pen. Enough with the blogging, enough with the essays. It's time to take out the camera.

Okay, that's it for today. This episode was produced by Claire Miller and Alison Weiss, edited by Joel Patterson, and engineered by Benjamin Spencer. Our research team is Dan Shallon, Isabella Kinsel, Kristen O'Donoghue, and Mia Silverio. Thank you for listening to Prof G Markets from Prof G Media. If you liked what you heard, give us a follow. I'm Ed Elson, and I will see you tomorrow.