Transcript
A (0:00)
Fiscally responsible. Financial geniuses. Monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds, because Progressive offers discounts for paying in full, owning a home, and more. Plus, you can count on their great customer service to help when you need it, so your dollar goes a long way. Visit progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states or situations.
B (0:33)
When you manage procurement for multiple facilities, every order matters, but when it's for a hospital system, they matter even more. Grainger gets it and knows there's no time for managing multiple suppliers and no room for shipping delays. That's why Grainger offers millions of products and fast, dependable delivery, so you can keep your facility stocked, safe, and running smoothly. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger, for the ones who get it done.
C (1:03)
Welcome to the podcast. I'm your host, Jaden Shafer. Today on the podcast we're talking about a new tool that Anthropic has just launched. Basically, we have this issue where 70 percent of all code in some companies, and 90 percent in others, is being generated by AI. And Anthropic has just launched a new code review tool that is going to be able to check this massive flood of AI generated code to see what's good and what's not. I think this is going to be awesome for developers, but also for all of us users of the software. There are a lot of cool implications and a lot of stuff that I am excited about, so I want to break down everything going on here, because I think we're about to get a lot less buggy software. A lot of software is going to get a lot more usable. Developers are obviously going to be rejoicing, but there are also some drawbacks to all of this, so I'm going to talk about all of that. Before we do, I actually have a request to make. This week is actually my birthday week. I am turning 30. I'm super excited. It's crazy; it feels weird turning 30. But there is one request I would ask for my birthday, if you would not mind, and this is not something I'm going to beg you for the rest of my life. For my birthday week, I'm not going to plug my company AI Box; I'm just going to ask for this: if you could leave a rating and review on this show for my birthday week, it would be amazing. This is something I've spent the last three years of my life on, uploading a podcast episode almost every day. So if you've gotten any value at any point in the last three years, or if you're a new listener, and you haven't already, this is the time to do it. It is my birthday week, I'm turning 30, and I would super, super appreciate a review on the podcast. And as a celebration, as a fun way to say thank you, I will actually be reading the most recent reviews, the good and the bad, the five star and the one star reviews that I've gotten, and giving you a quick response to each. This is something I don't usually do; especially if I get a one star review, I'm not going to sit there and argue with the person. If people want to move on from the show, that's cool. If you get value out of it, that's cool too. But because we're doing this for this one week, which I'm dubbing review week, I'm going to read them. So we're kicking this off with one of my most recent reviews. This was on March 2nd, and it is a one star review. So fair warning, this is a one star review, and this is what it said. It's from Hammacham, and he says: stop the Islamophobia. When was the last time you heard about Saudi Arabia being an enemy to the US? That's the one star review. I think it is specifically responding to my episode about OpenAI stealing a $200 million contract in the Anthropic versus Pentagon battle. And basically what happened there? Well, you guys all know, emotions are high. We have Anthropic in this whole battle with the Pentagon, and then OpenAI comes in, jumps in, and steals the contract. And this was right before Iran got invaded. I'm not exactly sure what I said in that podcast that made this person upset enough to call it Islamophobic. Evidently, from this, I was probably criticizing the country of Saudi Arabia, which, by the way, I think is generally a good partner to the US as an ally.
We buy a lot of their oil. Even if you hate them because of how their government is set up, we buy their oil, we use their oil, so we get a lot of value out of that partnership. We send them a lot of military supplies; they're kind of an ally in that region. So, you know, generally I'm happy with that. And I actually almost took funding from a huge Saudi Arabian incubator over there, and almost went and moved to Saudi Arabia for three months. My wife and I have a few kids, and at the end of the day she didn't want to move to an apartment in Saudi Arabia for a few months for that program, so I never ended up doing it. But you know, I've considered it. I think Saudi Arabia is generally good. The only response I'll give is that whatever I said obviously wasn't Islamophobic, since I'm not Islamophobic. I think all people, with all their beliefs and religions, are awesome, since I have my own. But what I will say is I would just encourage that person, or anyone listening: don't misconstrue it when I criticize the country of Saudi Arabia, especially when I'm criticizing countries in relation to AI policy, as being Islamophobic or as disliking your culture. I just think that's a pretty shallow take. I'm going to criticize every government if I think they're not doing something smart, including the US government. My goal is to be unbiased and academically honest. All right, thanks for listening to my rant. If you could leave a comment or review for this review week, for my birthday, I would super appreciate it. Let's get into the episode. So I think peer code review has been one of the most important but trickiest safeguards in software development. It helps teams catch bugs early, keep consistency across the whole code base, and improve the overall quality of the software you're shipping. This is something we see at my startup AI Box all the time. Right now we're all doing this vibe coding; even myself, I have tons of vibe coded projects on the side. Unfortunately, it's sometimes hard to productize them because of tricky, nasty bugs, and if you're not a developer, it's hard to catch, find, and fix them. Developers are using a lot of AI tools to generate code, like Claude Code or Codex from OpenAI, and we're generating tons of code right now; it's really cheap, really fun, and really fast. However, beyond just speeding up development, a lot of these tools can also introduce a whole bunch of hidden bugs, security risks, and code that developers don't fully understand, which makes all those hidden bugs and security risks even harder to find. Anthropic is building something they think is going to be the solution for this, which, personally, I'm super stoked about, since I use Claude Code at my startup AI Box. This is a new AI that can review AI generated code. They're calling it code review. It's built inside of Claude Code, and it's essentially designed to automatically analyze pull requests and then flag any potential risks or issues before they actually make it into production. Here's what Anthropic's head of product, Kat Wu, said about it: we've seen a lot of growth in Claude Code, especially within the enterprise.
One of the questions we keep hearing from enterprise leaders is: now that Claude Code is generating a huge number of pull requests, how do we review them efficiently? Pull requests are basically just the way developers submit code changes for review before they're merged into a project. But Wu says that AI assisted coding has dramatically increased the volume of those requests, which is creating a new bottleneck. And to be honest, I've actually heard this. It was funny, there was a moment with OpenClaw, which went mega viral. It's kind of this agent that can run on its own computer and take over and do all these tasks for you. The founder of OpenClaw was basically a one man team running this thing, and he gets acquired by OpenAI because it went super mega viral and so many people were using it. And it's funny, because even after the acquisition, I remember seeing him post on X about how many changes people were putting in. Because it was open source, anyone could submit code to make improvements to the project, which is super cool, and it's super cool that he built it that way. But he was saying, look guys, it went so viral that I'm getting so bogged down trying to review all of the code you're submitting. There was a certain number of pull requests he said he was able to review every day, and he was going full speed trying to get through as many as he possibly could, and it was a huge struggle, basically very, very difficult. So in any case, this is definitely a huge problem for a lot of people, especially when you look at some of this open source stuff. Some open source communities won't even allow AI generated code. I don't think that's the most common stance, but it's hard for them to always know what's going to have bugs or issues and to properly review it all, because people can push so much. So this new feature is launching as a research preview for Claude for Teams and Claude for Enterprise customers. And I think it comes at a pretty important moment for Anthropic. Like I mentioned earlier in the podcast, they have this big, high profile dispute with the US Department of Defense: they got designated a supply chain risk, and they filed a couple of lawsuits to fight that. So Anthropic has a big moment right now; a lot of people are looking at them. At the same time, Anthropic says their enterprise business is booming. Subscriptions have quadrupled since the start of this year; they are on an absolute tear. Claude Code's run rate revenue has already passed $2.5 billion, which is insane, because it was actually one of the developers over at Anthropic who built it as a side project, and now it's doing more than $2.5 billion in run rate revenue. Code review is aimed, for the most part, at large engineering organizations that are already using Claude Code; companies like Uber, Salesforce, and Accenture are already using it. Engineering leads are going to be able to enable the feature for their teams, which basically allows it to automatically analyze every pull request once you turn it on.
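To make that review step a little more concrete: this is not Anthropic's published implementation, just a minimal sketch of what a single automated review pass over a pull request diff could look like using the Anthropic Python SDK. The model id, the prompt wording, and the changes.diff file are my own placeholder assumptions.

```python
# Minimal sketch (NOT Anthropic's actual code review product): one review
# pass over a PR diff using the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def review_diff(diff: str) -> str:
    """Ask the model to flag logic errors in a unified diff."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; substitute your own
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Review this pull request diff. Flag logic errors only, "
                "not style. For each issue, give the file, line, and a fix.\n\n"
                + diff
            ),
        }],
    )
    return response.content[0].text


# Hypothetical usage: review a diff saved locally as changes.diff.
print(review_diff(open("changes.diff").read()))
```

In practice you would wire something like this into CI so it fires on every pull request; the point of Anthropic's feature is that it handles that integration for you.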
And then the system integrates with GitHub and leaves comments directly on the code, pointing out any issues and suggesting fixes. So instead of having to manually review all these things themselves like a human developer would, teams are just going to see that Claude has come through, skimmed it, written a code review, highlighted any issues, and given notes, and they can go review just those notes or any points of interest or concern it might have. Unlike a lot of other automated code tools that mostly focus heavily on formatting or style, Anthropic is intentionally designing code review to focus on logic errors, which is interesting. Wu commented on this and said: that's really important. A lot of developers have seen automated feedback before, and they get annoyed when it's not immediately actionable. We decided to focus purely on logic errors so we're catching the highest priority problems. When the AI identifies an issue, it explains its reasoning step by step. So it outlines what it believes the problem is, why it matters, and how it could be fixed. Issues are also labeled by severity; they're basically color coding it. Red is critical problems, yellow is potentially an issue, and purple is bugs tied to historical or legacy code. So they have this color coding you can skim through; they're trying to make this fast and easy for developers, to basically streamline their whole workflow. Under the hood, the system uses a multi agent architecture, which is important, right? It's not just one agent; they have multiple agents running through this. Several AI agents analyze the code base in parallel, so it's not like you run this thing once and have to wait for it to finish. There are multiple agents running through different parts of it at the same time, examining pull requests from different perspectives. Then there's a final agent that aggregates the findings and removes any duplicates, because if two agents run through and both see the same security finding, maybe related to two different sections, and they both report it, there's one agent that merges those together, removes the duplicates, and then ranks the most important issues. The tool also performs a light security analysis. I think they intentionally want to say, look guys, this is a quote unquote light security analysis; they don't want people to get overly confident that this is going to fix every security issue that could ever come out of AI generated code. But I think it is important that we're starting to have this conversation, because this absolutely is an issue in the industry.
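To picture the multi agent pipeline Wu describes, here's a small self contained sketch of the pattern, my own illustration rather than Anthropic's code: several reviewer agents (stubbed out below, where real model calls would go) scan the same diff in parallel, then an aggregation step drops duplicate findings and ranks what's left by the red/yellow/purple severity tiers mentioned above.

```python
# Sketch of the multi-agent review pattern described above (my own
# illustration, NOT Anthropic's implementation): reviewers run in
# parallel, then an aggregator dedupes and ranks their findings.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    severity: str  # "red" (critical), "yellow" (potential), "purple" (legacy)
    file: str
    line: int
    message: str


SEVERITY_RANK = {"red": 0, "yellow": 1, "purple": 2}


def aggregate(per_agent: list[list[Finding]]) -> list[Finding]:
    """Merge findings from all agents: drop duplicates, rank by severity."""
    unique = {(f.file, f.line, f.message): f
              for findings in per_agent for f in findings}
    return sorted(unique.values(),
                  key=lambda f: (SEVERITY_RANK[f.severity], f.file, f.line))


def run_review(diff: str, agents) -> list[Finding]:
    """Run each reviewer over the same diff in parallel, then aggregate."""
    with ThreadPoolExecutor() as pool:
        per_agent = list(pool.map(lambda agent: agent(diff), agents))
    return aggregate(per_agent)


# Stub agents standing in for real model calls; note both report the
# same auth.py issue, which the aggregator collapses into one finding.
security_agent = lambda diff: [Finding("red", "auth.py", 42, "token never expires")]
logic_agent = lambda diff: [Finding("red", "auth.py", 42, "token never expires"),
                            Finding("yellow", "cart.py", 7, "off-by-one in loop")]

for finding in run_review("<diff text>", [security_agent, logic_agent]):
    print(finding)
```

The interesting design choice is that final aggregation pass: running reviewers in parallel buys speed, but only if something downstream merges overlapping reports into one ranked list a human can actually skim.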
Engineering teams are also going to be able to customize additional checks based on their own internal standards, which is cool, right? It goes beyond just, hey, we built a tool that can do this for you. It's, well, do you guys have anything that you frequently need to check inside your code or inside your industry? You can go add those to it. And for deeper security reviews, Anthropic also has a separate product called Claude Code Security that can go even deeper on all of that. Because the system runs multiple agents simultaneously, code review can be pretty computationally intensive; it's going to use a lot of compute. The pricing follows the same token based structure they use for all of their AI services, so the cost depends on the size and complexity of the code being analyzed. They're estimating right now that the average review is going to cost around $15 to $25. And of course the counterargument is that this is some sort of added cost. But come on, if you were to go hire an analyst or any sort of developer or security researcher to do this, it would be hundreds or thousands or tens of thousands of dollars, not $15 or $25. So it's significantly bringing the cost down. Again, there are a couple of interesting thoughts from Wu, who said: this is coming from an enormous amount of market demand. As engineers build with Claude Code, the friction to create new features drops dramatically, but the need for code review increases. Our goal is to help enterprises build faster than ever while shipping far fewer bugs. I'm excited for this. Personally, I think a lot of these vibe coding tools know this is an issue. I use Lovable a lot to vibe code things, and it has a built in security feature that scans your whole project and highlights different security issues, and you can have it fix some of those issues, or it tells you what to do to fix them. I think this is incredibly useful, so I'm excited that Claude and Claude Code are integrating this. Of course, partly because I use Claude Code a lot at my startup, but also broadly for the whole industry: we're going to see a lot fewer bugs. And hopefully, if Claude is doing it, it's setting the standard for the whole market, and we'll see more of the other players in the space doing similar things. Excited to see where this goes in the future. Guys, thank you so much for tuning into the podcast. Remember, if you haven't already left a review, I would really, really appreciate a review on the podcast. We are past 150 reviews, and I would love to get to 200 before I turn 30 this week. Guys, it's my birthday. If you could leave me a review, I would appreciate it. Hope you guys all have a fantastic rest of your day.
