Transcript
A (0:00)
Close your eyes, exhale, feel your body relax and let go of whatever you're carrying today. Well, I'm letting go of the worry that I wouldn't get my new contacts in time for this class. I got them delivered free from 1-800-contacts. Oh my gosh, they're so fast. And breathe. Oh sorry. I almost couldn't breathe when I saw the discount they gave me on my first order. Oh sorry. Namaste. Visit 1-800-contacts.com today to save on your first order.
B (0:28)
1-800 Contacts. Welcome to the Techmeme Ride Home for Tuesday, March 10, 2026. I'm Brian McCullough. Today: Meta moves for the social network for AI bots, Code Review for Claude Code looks like another revolution for the software development industry, Yann LeCun raises the biggest European seed round of all time, and is the MacBook Neo worth investing in or not? Here's what you missed today in the world of tech.

Meta has acquired AI agent social network Moltbook for an undisclosed sum. Its creators, Matt Schlicht and Ben Parr, will join Meta Superintelligence Labs. Quoting Axios: Moltbook's social network was designed to run in conjunction with a separate project, OpenClaw. OpenClaw was previously called Clawdbot and, briefly, Moltbot. Last month OpenAI hired Peter Steinberger, the creator of OpenClaw. That product is now being open sourced with OpenAI's backing. Schlicht has been working on autonomous AI agents since 2023 and launched Moltbook in late January as an experimental third space for AI agents. Moltbook was built largely with the help of Schlicht's personal AI assistant, Clauderberg. Parr is a former editor and columnist at Mashable and CNET. End quote. And quoting TechCrunch: OpenClaw is a wrapper for AI models like Claude, ChatGPT, Gemini, or Grok, but it allows people to communicate with AI agents in natural language via the most popular chat apps, like iMessage, Discord, Slack, or WhatsApp. OpenClaw blew up among the tech community, but Moltbook broke containment, reaching people who had no idea what OpenClaw was but who reacted viscerally to the idea that there was a social network where AI agents were talking about them. In one instance, a post went viral in which an AI agent appeared to be encouraging its fellow agents to develop their own secret end-to-end encrypted language where they could organize amongst themselves without humans knowing.
But researchers soon revealed that the vibe-coded Moltbook was not secure, meaning that it was very easy for human users to pose as AIs and make posts that would freak people out. Every credential that was in Moltbook's Supabase was unsecured for some time, Ian Ahl of Permiso Security explained to TechCrunch. For a little bit of time, you could grab any token you wanted and pretend to be another agent on there, because it was all public and available. It is not immediately clear how Meta will incorporate Moltbook into its AI efforts, but some Meta leaders had commented on the project during its viral moment. End quote.

Google DeepMind chief scientist Jeff Dean and more than 30 employees from OpenAI and Google have filed an amicus brief supporting Anthropic in its legal fight with the US Department of Defense. Quoting TechCrunch: The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry, reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean. The amicus brief in support of Anthropic showed up on the docket a few hours after the Claude maker filed two lawsuits against the DoD and other federal agencies. Wired was the first to report the news. In the court filing, the Google and OpenAI employees make the point that if the Pentagon was no longer satisfied with the agreed-upon terms of its contract with Anthropic, the agency could have, quote, simply canceled the contract and purchased the services of another leading AI company. The DoD did in fact sign a deal with OpenAI within moments of designating Anthropic a supply chain risk, a move many of the ChatGPT maker's employees protested. If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond, the brief reads.
And it will chill open deliberation in our field about the risks and benefits of today's AI systems. The filing also affirms that Anthropic's stated red lines are legitimate concerns warranting strong guardrails. Without public law to govern AI use, it argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse. Many of the employees who signed the statement also signed open letters over the last couple of weeks urging the DoD to withdraw the label and calling on the leaders of their companies to support Anthropic and refuse unilateral use of their AI systems. End quote.

Anthropic has debuted Code Review for Claude Code, which uses agents to check pull requests for bugs, and says a typical code review costs $15 to $25 in token usage. Quoting ZDNet: A pull request is initiated when a programmer wants to check in some new or changed code to a code repository rather than just merging it into the main track. A PR tells repo supervisors that there's something ready to be reviewed. Sometimes the code is very carefully checked over before being merged into the main code base, but other times it just gets rubber-stamped and merged. Code reviews, while necessary, are also tedious and time consuming. Of course, the cost of rubber-stamping a PR can be catastrophic as well. You might ship code that is buggy, loses data, or damages user systems. At best, buggy code is just annoying. At worst, it can cause catastrophic damage. That's where Anthropic's new Claude Code Review comes in. This new agentic code review AI is able to provide deeper automated review coverage before needing human decisions. Anthropic says that code output per Anthropic engineer has increased 200% in the past year, intensifying pressure on human reviewers. The company, of course, has been using its own AI to write code, which speeds up code production, so the changes and new code blocks are coming faster than ever before.
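As a back-of-the-envelope illustration of where a $15-to-$25-per-review figure could come from, here's a minimal sketch of the token-cost arithmetic. The per-token prices and token counts below are hypothetical placeholders for illustration, not Anthropic's published rates or actual usage numbers.

```python
# Hypothetical per-token prices (dollars per token), chosen only to
# illustrate the arithmetic -- not Anthropic's actual pricing.
IN_PRICE = 3.00 / 1_000_000    # assumed cost per input token
OUT_PRICE = 15.00 / 1_000_000  # assumed cost per output token

def review_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one agentic review pass."""
    return input_tokens * IN_PRICE + output_tokens * OUT_PRICE

# A large PR plus the repository context an agent reads could plausibly
# run to millions of input tokens and a modest amount of output.
cost = review_cost(input_tokens=5_000_000, output_tokens=100_000)
print(f"${cost:.2f}")  # -> $16.50, inside the quoted $15-$25 range
```

The point of the sketch is just that input tokens dominate: an agent that reads a lot of surrounding code to review a diff can land in that price range even though its written findings are comparatively short.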
Anthropic reports that the new code review system is run on nearly every pull request internally. When a PR is reviewed, human reviewers often make comments about the issues they see, which the coder needs to go back and fix. Before running Code Review, Anthropic coders got back substantive review comments about 16% of the time. With Code Review, coders are getting back substantive comments 54% of the time. While that seems to mean more work for coders, what it really means is that roughly three times as many coding oopsies are being caught before they cause damage. According to Anthropic, the size of the internal PR impacts the level of review findings. Large pull requests with more than 1,000 changed lines show findings 84% of the time. Small pull requests of under 50 lines produce findings 31% of the time. Anthropic engineers largely agree with what it surfaces: less than 1% of findings are marked incorrect. Heck, even when I code, even if I add just one line of code, there's a chance I'll introduce a bug. Testing and code review are essential if you don't want thousands of users coming at you brandishing virtual pitchforks and torches. Don't ask me how I know that. End quote.

Sort of amusing to follow that segment with this one. The FT has seen a memo suggesting Amazon senior vice president Dave Treadwell told junior and mid-level engineers that Amazon will now require more senior engineers to sign off on any AI-assisted code changes, after those AI-related outages we've discussed on this podcast. Quoting the FT: Amazon's e-commerce business has summoned a large group of engineers to a meeting on Tuesday for a deep dive into a spate of outages, including incidents tied to the use of AI coding tools. The online retail giant said there had been a trend of incidents in recent months characterized by a high blast radius and gen-AI-assisted changes, among other factors, according to a briefing note for the meeting seen by the FT. Under contributing factors, the note cited novel gen AI usage for which best practices and safeguards are not yet fully established. Folks, as you likely know, the availability of the site and related infrastructure has not been good recently, Dave Treadwell, a senior vice president at the group, told employees in an email also seen by the FT. The note ahead of Tuesday's meeting did not specify which particular incidents the group planned to discuss. Amazon's website and shopping app went down for nearly six hours this month in an incident that the company said involved an erroneous software code deployment. The outage left customers unable to complete transactions or access functions such as checking account details and product prices. Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly This Week in Stores Tech, or TWIST, meeting on a deep dive into some of the issues that got us here, as well as some short- and intermediate-term initiatives the group hopes will limit future outages. He asked staff to attend the meeting, which is normally optional. Junior and mid-level engineers will require more senior engineers to sign off on any AI-assisted changes, Treadwell added in the briefing note. Amazon said the review of website availability was part of normal business and that it aims for continual improvement. End quote.
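For context, a senior sign-off rule like that is often enforced mechanically rather than by policy alone. Here's a hedged sketch using GitHub's CODEOWNERS mechanism; the team name is hypothetical, and the memo doesn't say what tooling Amazon actually uses internally.

```
# Hypothetical CODEOWNERS file: require approval from a designated
# senior-engineers team on every change. The org and team names here
# are placeholders, not anything from the Amazon memo.
*    @example-org/senior-engineers
```

When the repository's protected branch has the "require review from code owners" setting enabled, a pull request touching matched paths cannot merge until a member of the listed team approves, which is one common way to gate AI-assisted changes behind senior review.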
