Transcript
A (0:00)
This message comes from Capella University. That spark you feel, that's your drive for more. Capella University's FlexPath learning format lets you earn your degree at your pace without putting life on pause. Learn more at capella.edu. This is FRESH AIR. I'm Tonya Mosley. This week the Pentagon is considering cutting business ties with the artificial intelligence company Anthropic after the company declined to allow its chatbot Claude to be used for certain military applications, including weapons development. At the same time, the Wall Street Journal reports that Claude was used in a U.S. operation that led to the capture of Venezuelan leader Nicolas Maduro, claims Anthropic has not confirmed and has declined to discuss publicly. Meanwhile, outside military and intelligence circles, the same tool is being used for far less dramatic but still consequential purposes. A man in New York reportedly used Claude to challenge a nearly $200,000 hospital bill and negotiated most of it away. A romance novelist in South Africa has said she used it to help publish more than 200 novels in a single year. So what exactly is this system capable of, and how well do the people building it understand what they've created? My guest today, journalist Gideon Lewis-Kraus, spent months inside Anthropic trying to answer that question. The company is one of the most powerful AI firms in the world, valued at about $350 billion, and also one of the most secretive. It was founded by former OpenAI employees, the team behind ChatGPT, who left because they believed the race to build advanced artificial intelligence was moving too fast and could become dangerous. Gideon Lewis-Kraus is a staff writer at The New Yorker. His piece is called "What Is Claude? Anthropic Doesn't Know Either." Our interview was recorded yesterday. And Gideon, welcome to FRESH AIR.
B (2:01)
Thank you so much for having me.
A (2:02)
Gideon, let's get started by talking about the latest news. We learned last week that the military may have used Anthropic's tool Claude during the operation that captured Venezuelan dictator Nicolas Maduro. And reportedly they used it to process intelligence and analyze satellite imagery and things like that to support real-time decision-making. What are Anthropic's usage guidelines? What do they say about its use for violence or surveillance?
B (2:32)
Well, their contracts with other companies and with the government stipulate that it can't be used for domestic surveillance or for autonomous weaponry. Now, of course, the issue with these systems is that once you put them into someone's hands, it's very hard to predict or control how they're going to be used. So it seems to me from the reporting we've seen from the Wall Street Journal and elsewhere that Anthropic may have also been caught by surprise by this. They didn't seem to have a formulated response, and they seemed as though they perhaps hadn't even known that Claude had been used in the Maduro raid.
