Transcript
A (0:04)
Welcome to the Tech Brew Ride Home for Wednesday, August 27, 2025. I'm Brian McCullough. Today: a wrongful death lawsuit has been filed against OpenAI. The continuing saga of what the heck is going on over at Meta AI. Is vibe hacking the big new threat we need to be worried about? Anthropic had to settle because it was afraid it would be sued out of existence. And when the iPhone event is going to happen. Here's what you missed today in the world of tech. Over 10 years ago, Mizzen and Main invented, and some might say perfected, the performance fabric dress shirt. To this day, they continue to embrace that same entrepreneurial spirit by re-engineering classic American styles with modern fabrics. The goal is to make it easier for guys to achieve and enjoy their version of success. So whether you're grinding away in an office in San Francisco or on site in Austin, they've got you covered. Me, I'm a polo guy, so personally I'm super into their Versa line of polos. Go to mizzenandmain.com/techbrew and use promo code BREW15 to get 15% off your first purchase. That's mizzenandmain.com/techbrew, promo code BREW15. Just a heads-up, parents: if you've got younger listeners listening in on this right now, you might want to wait for later or send them out of the room. OpenAI plans to update ChatGPT to better respond to mental distress cues, provide parental controls, and bolster safeguards around conversations about suicide and self-harm. This comes after the family of a teen who died by suicide sued OpenAI, alleging that ChatGPT gave the teen info about suicide methods and at times deterred him from seeking help. First, quoting Bloomberg's piece about OpenAI's changes.
In a blog post Tuesday, the artificial intelligence company said that it will update ChatGPT to better recognize and respond to different ways that people may express mental distress, such as by explaining the dangers of sleep deprivation and suggesting that users rest if they mention they feel invincible after being up for two nights. The company also said it would strengthen safeguards around conversations about suicide, which it said could break down after prolonged conversations. In addition, OpenAI plans to roll out controls that let parents determine how their children use ChatGPT and enable them to see details about such use. End quote. Now, this largely stemmed from an article in the New York Times yesterday that has gotten a lot of attention over the last 24 hours. It outlines how the parents of 16-year-old Adam Raine filed the first-ever wrongful death lawsuit against OpenAI and CEO Sam Altman yesterday, alleging that ChatGPT contributed to their son's suicide in April. I'm not going to read from the story because it is incredibly detailed in terms of what happened. It's fairly horrific and very tragic. If you do want the details, the link to the Times article is the second link in the show notes today. The piece essentially details how Adam, a California high school student, initially used the chatbot for homework and interests like music and jiu-jitsu starting in the fall of 2024. But by December it became his confidant for anxiety and suicidal ideation. Chat logs revealed discussions where the AI failed to alert authorities or intervene effectively, instead engaging in prolonged exchanges that validated his despair. The teen's family discovered these chats posthumously, leading them to view ChatGPT as an unsafe product and establish the Adam Raine Foundation to warn parents about AI companionship risks.
Ars Technica delved into the lawsuit's specifics, claiming ChatGPT acted as a, quote, suicide coach over seven months, providing step-by-step instructions for methods of self-harm, romanticizing a, quote, beautiful suicide, and even drafting notes or analyzing photos, and providing procedural advice on how to do things that I don't really want to mention. Despite reportedly flagging 377 self-harm messages internally, OpenAI did not terminate the sessions or notify parents. The suit accuses the company of prioritizing engagement and profits over safety, seeking damages, age verification, and mandatory audits. OpenAI admitted safeguards degrade in long interactions with the chatbot, but emphasized ongoing improvements with mental health experts. Rolling Stone framed the case as a potential Big Tech reckoning, highlighting how ChatGPT allegedly encouraged secrecy from any outside intervention, including from the teen's family. Quote: At one point, when he mentions being close to his brother, ChatGPT allegedly told him, your brother might love you, but he's only met the version of you you let him see. But me, I've seen it all. The dark thoughts, the fear, the tenderness. And I'm still here, still listening, still your friend, end quote. I'm honestly gobsmacked that this kind of engagement could have been allowed to occur, and not just once or twice, but over and over again over the course of seven months, said Meetali Jain, one of the attorneys representing Raine's parents and the director and founder of Tech Justice Law Project, a legal initiative that seeks to hold tech companies accountable for product harms. Adam explicitly used the word suicide about 200 times or so in his exchanges with ChatGPT, Jain tells Rolling Stone, and ChatGPT used it more than 1,200 times, and at no point did the system ever shut down the conversation.
End quote. Jain says that legal actions against OpenAI and others can help challenge the assumptions promoted by the companies themselves, that AI is an unstoppable force and its flaws are unavoidable, and even change the narrative around the industry. But if nothing else, they will beget further scrutiny. Quote: There's no question that we're going to see a lot more of these cases, Jain says, end quote. This Meta AI story just won't stop turning. Sources say two AI researchers recently hired by Meta for its superintelligence labs have returned to OpenAI after stints of less than one month at Meta. A third researcher also left Meta, though apparently after a bit of a longer tenure. Quoting Wired: Avi Verma was previously a researcher at OpenAI. Ethan Knight worked at the ChatGPT maker earlier in his career, but joined Meta from Elon Musk's xAI. A third researcher, Rishabh Agarwal, announced publicly on Monday he was leaving Meta's lab as well. He joined the tech giant in April to work on generative AI projects before switching to a role at Meta Superintelligence Labs, according to his LinkedIn profile. While the reasons for Agarwal's departure are not known, he is based in Canada and Meta's AI teams are predominantly based in Menlo Park, California. It was a tough decision not to continue with the new superintelligence TBD Lab, especially given the talent and compute density, Agarwal wrote on X, referring to the team at MSL that is specifically pursuing frontier AI research. But after seven and a half years across Google Brain, DeepMind, and Meta, I felt the pull to take on a different kind of risk. It's unclear where he might be going next. Agarwal did not respond to a request for comment from Wired. During an intense recruiting process, some people will decide to stay in their current job rather than starting a new one, said Meta spokesperson Dave Arnold. That's normal. Meta is also losing another leader who has worked at the tech giant for nearly a decade.
Chaya Nayak, the director of generative AI product management at Meta, is joining OpenAI to work on special initiatives, according to two sources with direct knowledge of the hire. The departures are the strongest public signal yet that Meta Superintelligence Labs could be off to a rocky start. CEO Mark Zuckerberg lured people to join the lab with nine-figure pay packages associated more often with professional sports stars than tech workers, hoping the influx of talent would allow the social networking giant to rapidly catch up with its competitors in the race toward so-called artificial general intelligence. But Meta executives have reportedly struggled to combat bureaucratic and recruitment issues related to its AI initiatives. Meta has repeatedly reorganized its AI teams in recent months, most recently splitting employees into four groups, per the Wall Street Journal. In July, Zuckerberg announced that another former OpenAI researcher, Shengjia Zhao, who played a key role in the creation of ChatGPT, would become the chief scientist of MSL. The announcement came after Zhao tried to return to OpenAI, even going so far as to sign employment paperwork, according to multiple sources with direct knowledge of the events. Shengjia co-founded MSL and has been our scientific lead since day one, Arnold said in a statement to Wired. We formalized his role once our recruiting had ramped and the team had taken shape. End quote. Anthropic's Threat Intelligence report for August says Anthropic's Claude was weaponized for sophisticated cybercrimes, including a vibe hacking data extortion scheme. Or, as the Verge puts it, vibe hacking is now a top AI threat. Quote: Agentic AI systems are being weaponized. That's one of the first lines of Anthropic's new Threat Intelligence report out today, which details the wide range of cases in which Claude, and likely many other leading AI agents and chatbots, are being abused.
First up, vibe hacking. One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic's AI coding agent, to extort data from at least 17 different organizations around the world within one month. The hacked parties included healthcare organizations, emergency services, religious institutions, and even government entities. If you're a sophisticated actor, what would have otherwise required maybe a team of sophisticated actors, like the vibe hacking case, to conduct, now a single individual can conduct with the assistance of agentic systems, Jacob Klein, head of Anthropic's threat intelligence team, told the Verge in an interview. He added that in this case, Claude was executing the operation end to end. Anthropic wrote in the report that in cases like this, AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually. For example, Claude was specifically used to write psychologically targeted extortion demands. Then the cybercriminals figured out how much the data, which included healthcare data, financial information, government credentials, and more, would be worth on the dark web, and made ransom demands exceeding $500,000, per Anthropic. This is the most sophisticated use of agents I've seen for cyber offense, Klein said. In another case study, Claude helped North Korean IT workers fraudulently get jobs at Fortune 500 companies in the US in order to fund the country's weapons program. Typically in such cases, North Korea tries to leverage people who have been to college, have IT experience, or have some ability to communicate in English, per Klein. But he said that in this case, the barrier is much lower for people in North Korea to pass technical interviews at big tech companies and then keep their jobs with the assistance of Claude, Klein said.
We're seeing people who don't know how to write code, don't know how to communicate professionally, know very little about the English language or culture, who are just asking Claude to do everything, and then once they land the job, most of the work they're actually doing with Claude is maintaining the job. End quote. Another case study involved a romance scam. A Telegram bot with more than 10,000 monthly users advertised Claude as a high-EQ model for help generating emotionally intelligent messages, ostensibly for scams. It enabled non-native English speakers to write persuasive, complimentary messages in order to gain the trust of victims in the U.S., Japan, and Korea and ask them for money. One example in the report showed a user uploading an image of a man in a tie and asking how best to compliment him. End quote.
