Transcript
A (0:00)
Welcome to the podcast. I'm your host, Jayden Schaefer. Today on this podcast we have some wild stories, one of them being that Mark Zuckerberg, while many people have called him a robot for many years, is officially creating an AI version of himself that's going to take questions at meetings. Apple is reportedly testing four different designs for smart glasses that they're going to launch with AI embedded in them. Vercel's CEO went on stage and said that the company is ready to IPO because AI agents are basically deploying 30% of the apps on their platform. Anthropic temporarily banned the creator of OpenClaw, and the Trump administration is apparently encouraging major banks to test Anthropic's Mythos model. There's a ton going on, including all of the drama with Sam Altman's home being attacked with a Molotov cocktail. Oh my gosh, this is a wild timeline. Let's get into all of it. Before we do: if you're someone who works with AI regularly and you're bouncing between different tools, you should absolutely check out AI Box, my own startup. At AI Box AI, we give you over 70 models in one place. And the part that I think is really underrated is the automation builder. You just describe what you want in plain English, no code, no workflow diagrams, and it will build the tool for you. It's $8.99 a month to get access to all 80 of the top models. That's image, audio, video, text. You also get our no-code AI app builder. This should save you a ton of money compared to paying for all of these different tools separately. I've been using it and it's becoming one of the things I cannot live without. There's a link in the description to AI Box AI. The first story I want to talk about today is Apple testing new designs for smart glasses. You know, I'm a huge sucker for smart glasses. I'm not even a huge Meta stan, I would say, but I think that their smart glasses are phenomenal, and I think a lot of other players in the industry are noticing that.
According to Bloomberg's Mark Gurman, Apple is actively testing four different frame designs for smart glasses, and their goal is to sell these in 2027. I mean, obviously this is a good play. They have the technology because they have the Apple Vision Pro headset, which I honestly forget exists until I have to go to the Apple Store to get something done with my laptop. I had to do that recently, and I saw them on display and people were trying them on. I just feel like those were a really big flop as far as a product goes. Meta might have kind of baited them into the industry, but the glasses are not a flop. I know a ton of people that will actively wear Meta Ray-Bans around with AI and cameras embedded in them. And so I think Apple knows that this is a good play. According to the report, there are two oval or circular options and a bunch of different sizes, and they're also looking at black, blue, and light brown. And I think what's interesting is these aren't gonna have displays, so there are no AR overlays, no kind of mixed reality, even though they have the VR headset. They're going to basically let you take photos and videos, answer calls, play music, and interact with, of course, their new AI-upgraded Siri. So functionally, I think it's a lot closer to what Meta is doing with their Ray-Ban glasses today than anything like the Vision Pro, or what Meta is going to be doing in the future, or what Google is working on in the future with augmented reality and glasses. I think this is Apple, you know, basically accepting reality. The Vision Pros did not land the way they hoped, and so they're hopefully going to move into something that has a lot more market appeal and that people are a lot more excited about. Something else that I thought was very fascinating right now: Vercel's CEO is Guillermo Rauch, who, if you don't follow him on X, by the way, is a legend and drops some really great insights into what's going on with AI, so I highly recommend giving him a follow. He recently said that the company's annual recurring revenue has gone from about $100 million at the start of 2024 to a run rate of $340 million by the end of February this year. And when he was asked about an IPO, he basically said that Vercel is, quote, very much a working public company, and that it's, quote, ready and getting more ready every day. Personally, I don't think any of this is surprising. In the last two weeks, I think I've spun up five different projects where I've moved them off of other hosting providers and put them on Vercel, because it integrates so well with Claude Code. And if you ask Claude Code, like, where should I host my Claude Code app, it's going to say, well, why don't you try Netlify or Vercel. And Vercel is, you know, a slicker-looking company than Netlify. So it's a no-brainer, right? Like, what features do they have? I don't know, but Claude Code recommended it, so I set up five of my websites on Vercel. In all seriousness though, I'm on their free tier. I'll probably get bumped up to their paid tier here soon, but I put so many projects on there. Vercel is an absolute legend, and because it has an API, it integrates with Claude Code very easily. I don't go into Vercel and set up anything. I don't know what the inside of it looks like. I don't know how to, you know, point my name servers there. I don't have to do any of that. I just tell Claude Cowork, hey, you know, I'm building an app, go to my domain registrar, point it at Vercel, get everything set up for me, and it does it all. So Vercel is a huge winner in this AI race, and it's kind of interesting, because obviously Anthropic is coming out of the gate and just killing a bunch of software industries. There's also a bunch of infrastructure companies, right, like servers and databases? You have, like, Supabase.
There's a lot of these companies that, integrated with AI, are just phenomenal companies. And I mean, look, 30% of the apps running on Vercel's platform right now came from AI agents, not humans writing code. A hundred percent of the apps that I have on Vercel are coming from agents, not me, because there's no way I want to do that. So I think this is definitely the direction the market's going in, and Vercel has really, I think, phenomenally captured that. The next thing I want to talk about, speaking of Anthropic, is their temporary ban on the creator of OpenClaw. So Peter Steinberger, he created OpenClaw, basically an open source AI coding tool. It went super viral. He recently posted on X and said that Anthropic had suspended his account for, quote, suspicious activity. Basically, this happened shortly after Anthropic changed their pricing so that Claude subscriptions no longer covered usage through third-party tools like OpenClaw. And of course he created OpenClaw, and he was kind of hired, or acqui-hired, by OpenAI. Users now have to pay separately through the API, so they can't just use the Claude Max subscription, which, to be fair, is a subsidized subscription. That's what I have. I don't use OpenClaw because I'm using Claude Cowork and Claude Code kind of combined to get a lot of my projects done. Claude Cowork is amazing. OpenClaw, I'm sure, is amazing too. But Claude Cowork is amazing because it takes control of your computer; it has computer use. You tell it to do anything. It goes and sets up your Vercel server, your websites, your hosting, your databases. It can go edit videos in CapCut. I mean, honestly, it's insane. So the problem with it, though, is that if you're on the $200 a month tier with Claude Cowork, you're really using thousands of dollars of credits, and they're just kind of subsidizing it right now for early users, and they don't want to subsidize people that are using OpenClaw.
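As a quick aside, here's a back-of-envelope sketch of that subsidy argument, how a $200 flat tier can end up covering thousands of dollars of metered usage. The per-token prices and monthly volumes below are illustrative assumptions I'm making for the example, not Anthropic's actual published rates:

```python
# Rough, illustrative sketch of the subscription-subsidy math.
# Prices and usage volumes are assumptions for the example, NOT real rates.

subscription = 200.00            # flat monthly tier mentioned in the episode ($)

input_price_per_mtok = 15.00     # assumed $ per million input tokens
output_price_per_mtok = 75.00    # assumed $ per million output tokens

# What a heavy agentic-coding user might plausibly burn through in a month:
input_mtok = 150                 # million input tokens (assumption)
output_mtok = 20                 # million output tokens (assumption)

# What the same usage would cost if paid through a metered API instead
api_equivalent = (input_mtok * input_price_per_mtok
                  + output_mtok * output_price_per_mtok)
subsidy = api_equivalent - subscription

print(f"API-equivalent cost: ${api_equivalent:,.0f}/month")
print(f"Implied subsidy:     ${subsidy:,.0f}/month")
```

Under these made-up numbers, the metered cost works out to $3,750 a month against a $200 subscription, which is the shape of the "thousands of dollars of credits" claim; the point is the size of the gap, not the exact figures.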
So honestly, I know they actually get a lot of flak for this, because it's like, oh, they're fighting against open source by saying that everyone has to use their API to run it on OpenClaw. I mean, it's subsidized. So at the end of the day, it's what's kind of more profitable for their company. So I don't know, I'm not too, too mad about it. But in any case, I think one thing that's really interesting from this is the tension that all of this showed between open source tools and proprietary platforms. A lot of people got quite mad about that. Obviously, it's kind of a new policy that they changed, and he got flagged for, quote unquote, suspicious activity for how he was using it, which was against their new terms. But anyways, posting on X, a lot of people got quite upset about it. The next thing I want to talk about is that Trump officials are apparently encouraging banks to test Anthropic's Mythos model. This is a model that went super viral because, you know, it can go and hack, like, all of this different software. We've recently had a lot of reports come out that say, you know, Anthropic saying it was too dangerous to release was probably a little bit overblown and probably a big publicity stunt. But apparently Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell brought bank executives in for a meeting this week and encouraged them to use this new model to detect security vulnerabilities. JPMorgan Chase was already listed as one of the original partners when Anthropic kind of announced this, but now Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are all going to be testing it out, too. What I think is interesting is that Anthropic is currently in a court battle with the Trump administration.
And the Department of Defense has designated Anthropic as a supply chain risk after some of their negotiations broke down over limiting military use of the model. Anthropic put some terms in saying, hey, look, the military can't do these things with it. The military wasn't happy about that. So they're having this kind of legal battle, and at the same time, when it comes to patching security vulnerabilities, it's like, well, we'll put our little legal battle to the side, make sure to use their latest and greatest model for the banking system. Which, honestly, as a user of the banking system, I'm happy is happening, I think. It's not just a U.S. thing, though. The Financial Times reported that UK financial regulators are also discussing the risks posed by Mythos. They're talking about it from a different angle, though: they're concerned about what it means that the model might be so good at finding vulnerabilities that Anthropic won't release it publicly. So definitely a very complicated relationship between some cybersecurity organizations, governments, and these new models that keep coming up. Okay, the wildest story, I think, that happened in the last few hours is that Sam Altman has posted publicly. He responded to the New Yorker investigation, and there was an attack on his home, which I think is absolutely wild. Definitely very unsettling. Early on Friday morning, someone threw a Molotov cocktail at Sam Altman's home in San Francisco. Nobody was hurt. The suspect was later arrested at OpenAI headquarters, where he was threatening to burn the building down. So obviously a mentally unstable person. This came just a couple of days after the New Yorker published a really long investigation by Ronan Farrow and Andrew Martins, two very serious journalists. Farrow won a Pulitzer for his Harvey Weinstein reporting.
And they basically interviewed more than 100 people with knowledge of Sam Altman's business dealings, and the picture that they painted, I mean, I saw a bunch of clips of them kind of talking about it on podcasts and social media, and it was definitely very unflattering. Most sources described Sam Altman as having what they called, quote, a relentless will to power that set him apart from other tech CEOs. One anonymous board member from OpenAI was actually quoted saying that he combines, quote unquote, a strong desire to please people, to be liked in any given interaction, with a sociopathic lack of concern for the consequences that might come from deceiving someone. So, I mean, obviously this is very heavy. I think it's pretty consistent with reporting from other journalists who've also profiled Sam Altman over the years. There's definitely this kind of pattern of questions about whether the public persona matches how he's behaving in private, behind closed doors, with his board. I mean, a lot of the stuff that I saw specifically was him kind of going to the board and just flat out lying about things. Sam Altman published a blog post Friday night responding to both the attack, you know, someone attacking his home, and the article. He acknowledged that he had made mistakes, particularly around being, quote, conflict averse. I mean, you know, he's basically saying sometimes he will be conflict averse and lie to avoid conflict, and I think he said that that's not great. He specifically referenced the 2023 board drama, which is, you know, when he got removed and then reinstated as CEO. He called it something that he's not proud of and said he handled it badly. He said that he's, quote, a flawed person in the center of an exceptionally complex situation, trying to get a little better each year. It's not getting better every day, but he's getting better every year.
He also drew a really direct line between the article and the attack. In his blog post he said that someone had warned him that this article was going to come out, and that, you know, they're like, look, it's going to be super dangerous for you, because a lot of people are mad about AI, and now with this, you're going to be in danger. And apparently he was like, no way, I'm going to be fine. And then he said, quote, he had underestimated the power of words and narratives. In his post he also talked about what he called the ring of power dynamics in the AI industry, basically this kind of idea that the prospect of somebody controlling AGI makes people do really extreme things. And then he kind of made the point that the solution isn't for any one person or company to hold that power, but to, quote, orient towards sharing the technology with people broadly. Which obviously is a very interesting statement from the CEO of a company that was previously an open source nonprofit and has since become a closed source for-profit, and a very valuable one; you know, the most valuable private company in the entire world. So I think there are a few things that happened here simultaneously. First, obviously, the attack itself is super disturbing. I mean, whatever your opinion of Sam Altman or OpenAI, obviously no one should have their home firebombed. That's super messed up. And it reminds me of, at probably an even more extreme level, when Elon was getting involved in politics and arsonists were throwing Molotov cocktails inside of, like, Tesla stores and burning them down, and people were there. And it's like, obviously this is very messed up. It's vigilante, it's anarchist, it's illegal and dangerous and, like, terrible.
So I think that's on the one side, but I also think it highlights something that we're going to see more of as AI becomes kind of this bigger political and cultural flashpoint. You know, the people building these systems are becoming targets in a way that they weren't before, and I think a lot of the rhetoric around AI is getting hotter. Also, you can kind of look at the New Yorker piece; I've looked through it a lot. You know, whether you come away thinking Sam Altman is fundamentally untrustworthy or just kind of a complicated person doing a hard job, that probably depends on your opinion. But I think just the sheer number of people willing to go on kind of background with critical accounts is pretty notable. There are a hundred-plus sources in this reporting, so, like, there's a lot of people that were interviewed for this. And the other thing that I think is interesting is that Sam Altman's response was more kind of personal and reflective than what we usually get from tech CEOs that are in this kind of crisis mode. He didn't lawyer up the language or, like, deny anything. He basically admitted that he was flawed, he kind of apologized to people that he's hurt, and he acknowledged the, quote, Shakespearean drama in the industry right now. So whether that reads as kind of genuine self-reflection, or maybe more as kind of a calculated PR thing, that's up to you. But I do think it's worth kind of taking it at face value and seeing what comes next. Something about this whole environment, you know, this moment we're in: I think the anxiety is real, the stakes of who controls these systems are really enormous, and I think the public figures in the space are under a lot more scrutiny than ever. Right? There's kind of a combination of that New Yorker profile, the attack, and Sam Altman's response, all feeling like a bit of a turning point in how the public relates to the people building AI.
I think it's not, you know, just kind of this technology story anymore. It's kind of becoming this more personal and also political story. So anyways, a lot going on, and I don't think that's going to ease up anytime soon. Okay, everyone, that is the show for today. If you're getting value from these episodes, I would really, really appreciate it if you could leave a review, drop a comment on Apple Podcasts, or leave some stars on Spotify. It really, honestly, helps the show so much. And if you want to check out AI Box, head over to AIBox.ai; the link is in the description. I'll catch you in the next episode.
