Transcript
A (0:00)
No one goes to Hank's for spreadsheets. They go for a darn good pizza. Lately, though, the shop's been quiet, so Hank decides to bring back the $1 slice. He asks Copilot in Microsoft Excel to look at his sales and costs and help him see if he can afford it. Copilot shows Hank where the money's going and which little extras make the dollar slice work. Now Hank's has a line out the door. Hank makes the pizza. Copilot handles the spreadsheets. Learn more at m365copilot.com/work.
B (0:34)
Welcome to the Techmeme Ride Home for Wednesday, April 8, 2026. I'm Brian McCullough. Today I've got another tipping-point day for you. Anthropic is making yet more shockwaves across the industry by not releasing a model. Why? Because it could potentially break everything software-related. Does that even matter if the Chinese AI companies will just release everything open source? And Elon says he personally doesn't want a dime from Sam Altman. Here's what you missed today in the world of tech. Anthropic sort of announced its latest AI model yesterday, that rumored Mythos model, except it didn't release it at all. In fact, they're specifically holding it back. Why? Because it's too dangerous, security-wise. What do I mean by that? Well, Anthropic announced Project Glasswing, a cybersecurity initiative that will use its Claude Mythos Preview model to help find and fix software vulnerabilities now, before Mythos is ever released. In other words, had they released the Mythos model now, they feared it would unleash a deluge of hacks. Anthropic says Mythos Preview is a general-purpose model that found thousands of high-severity vulnerabilities out in the wild, including some in every major OS and web browser. So they are making Claude Mythos Preview available to more than 40 organizations that maintain critical software before they make it generally available, so that security folks can get ahead of this. Basically, this might be another tipping point. You know all the fears of AI jumping ahead of human capabilities? In a somewhat small but, in reality, quite important way, this is a tipping point, sort of like that. I sound like I'm maybe even minimizing this, and I assure you I am not. Anthropic is coming out and saying that anyone with access to this model would be able to break basically any OS out there. And one of the things that security folks always fear is a bunch of vulnerabilities surfacing all at once.
Because there are only so many hands that can be brought on deck at a time to secure things. And quoting Ethan Mollick on Twitter: "In different hands, Mythos would be an unprecedented cyber weapon. I am not sure how we deal with this, except to note a narrow window where we know only three companies could be at this level of capability. But it may be that Chinese models, maybe open-weight ones, get there in nine months." The Project Glasswing launch partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, Microsoft, Nvidia and Palo Alto Networks. And Anthropic has committed up to $100 million in usage credits for Project Glasswing members, along with $4 million in direct donations to open source security organizations. What I'm saying is this is maybe another day that might not get noticed by normal folks right now, but that we might look back at as a historical tipping point, the canary in the coal mine, if you will. AI is more powerful than maybe society at large is ready to handle. Quoting VentureBeat: Glasswing is something categorically different from a revenue milestone or a compute deal. It's Anthropic's most ambitious attempt to translate frontier AI capabilities, capabilities the company itself describes as dangerous, into a defensive advantage before those same capabilities proliferate to hostile actors. "We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities," Newton Cheng, Frontier Red Team cyber lead at Anthropic, told VentureBeat in an exclusive interview. "However, given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout for economics, public safety and national security could be severe." That language, the fallout could be severe, is striking, coming from the company that built the model.
Anthropic is effectively arguing that the tool it created is powerful enough to reshape the cybersecurity landscape, and that the only responsible thing to do is keep it restricted while giving defenders a head start. The technical results reinforce that claim. According to Anthropic's press release, Mythos Preview was able to find nearly all of the vulnerabilities it surfaced, and develop many related exploits, entirely autonomously, without any human steering. Three examples stand out. The model found a 27-year-old vulnerability in OpenBSD, widely regarded as one of the most security-hardened operating systems in the world and commonly used to run firewalls and critical infrastructure. The flaw allowed an attacker to remotely crash any machine running the OS simply by connecting to it. It also discovered a 16-year-old vulnerability in FFmpeg, the near-ubiquitous video encoding and decoding library, in a line of code that automated testing tools had exercised 5 million times without ever catching the problem. And perhaps most alarmingly, Mythos Preview autonomously found and chained together several vulnerabilities in the Linux kernel to escalate from ordinary user access to complete control of the machine. All three vulnerabilities have been reported to the relevant maintainers and have since been patched. For many other vulnerabilities still in the remediation pipeline, Anthropic says it is publishing cryptographic hashes of the details today, with plans to reveal specifics after the fixes are in place. On the Cybergem evaluation benchmark, Mythos Preview scored 83.1%, compared to 66.6% for Claude Opus 4.6, Anthropic's next-best model. The gap is even wider on coding benchmarks: Mythos Preview achieves 93.9% on SWE-bench Verified versus 80.8% for Opus 4.6, and 77.8% on SWE-bench Pro versus 53.4%.
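The hash-publishing step described there is a standard cryptographic commitment: publish a digest of the vulnerability write-up now, reveal the text once patches ship, and anyone can verify nothing was changed in between. Anthropic doesn't say exactly how its hashes are computed; a minimal sketch in Python, with an illustrative random nonce added so guessable report text can't be brute-forced from the digest, might look like this:

```python
import hashlib
import secrets

def commit(disclosure: str) -> tuple[str, str]:
    """Publish only the digest today; keep the disclosure and nonce private.
    The random nonce prevents brute-forcing of guessable report text."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + disclosure).encode("utf-8")).hexdigest()
    return digest, nonce

def verify(disclosure: str, nonce: str, published_digest: str) -> bool:
    """After the fix ships, reveal the text and nonce; anyone can re-hash
    them and confirm the report matches what was committed earlier."""
    recomputed = hashlib.sha256((nonce + disclosure).encode("utf-8")).hexdigest()
    return recomputed == published_digest

# Hypothetical report text, for illustration only.
report = "Remote crash in exampled 2.4: malformed handshake packet"
digest, nonce = commit(report)

assert verify(report, nonce, digest)            # genuine reveal checks out
assert not verify(report + "!", nonce, digest)  # any tampering is detectable
```

In practice such a digest would also be timestamped (posted publicly or anchored to a transparency log) so the commitment date itself is verifiable.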
Finding thousands of zero-days all at once sounds impressive, but actually handling the output responsibly is a logistical nightmare, and one of the sharpest criticisms that security researchers have raised about AI-driven vulnerability discovery: flooding open source maintainers, many of whom are unpaid volunteers, with an avalanche of critical bug reports could easily do more harm than good. Cheng told VentureBeat that Anthropic has built a triage pipeline specifically to manage this problem. "We triage every bug that we find and then send the highest-severity bugs to professional human triagers we have contracted with to assist in our disclosure process, by manually validating every bug report before we send it out, to ensure that we send only high-quality reports to maintainers," he said. That pipeline is designed to prevent exactly the scenario that maintainers fear most: an automated fire hose of unverified reports. "We do not submit large volumes of findings to a single project without first reaching out in an effort to agree on a pace the maintainer can sustain," Cheng added. When Anthropic has access to the source code, the company aims to include a candidate patch with every report, labeled by provenance, meaning the maintainer knows the patch was written or reviewed by a model, and offers to collaborate on a production-quality fix. Models can write patches, Cheng noted, "but there are many factors that impact patch quality, and we strongly recommend that autonomously written patches are put under the same scrutiny and testing that human-written patches are." Perhaps the most revealing comment came from Jim Zemlin, CEO of the Linux Foundation, who pointed to the fundamental asymmetry that has plagued open source security for decades. In the past, security expertise has been a luxury reserved for organizations with large security teams.
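Anthropic hasn't published code for that pipeline, but the shape Cheng describes, a severity floor, mandatory human validation, and a per-maintainer pace cap, can be sketched roughly as follows. Every name and threshold here (`Finding`, the 7.0 floor, the cap of 3) is an illustrative assumption, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # All field names and scales here are illustrative, not Anthropic's.
    project: str
    severity: float          # e.g. a CVSS-style 0.0-10.0 score
    validated: bool = False  # set True only after human triager review

def triage(findings, severity_floor=7.0, per_project_cap=3):
    """Hold back anything low-severity or unvalidated, and cap how many
    reports go to any one project so maintainers aren't flooded."""
    outbox: dict[str, list[Finding]] = {}
    # Process highest-severity findings first so the cap keeps the worst bugs.
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        if f.severity < severity_floor or not f.validated:
            continue
        queue = outbox.setdefault(f.project, [])
        if len(queue) < per_project_cap:
            queue.append(f)
    return outbox
```

A real pipeline would also handle deduplication, maintainer outreach, and patch provenance labeling, but the core invariant is the same: nothing leaves the queue unvalidated or above the agreed pace.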
Open source maintainers, whose software underpins much of the world's critical infrastructure, have historically been left to figure out security on their own. Project Glasswing, he said, offers a credible path to changing that equation. The most consequential question raised by Project Glasswing is not whether Mythos Preview's capabilities are real (the partner endorsements and patched vulnerabilities suggest they are), but how much time defenders actually have before similar capabilities are available to adversaries. Cheng was candid about the timeline. "Frontier AI capabilities are likely to advance substantially over just the next few months," he told VentureBeat. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely." He described Project Glasswing as an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity, but added a crucial caveat: "It's important to note this is a starting point. No one organization can solve these cybersecurity problems alone." That framing, months, not years, is worth taking seriously. DARPA launched its original Cyber Grand Challenge in 2016, a competition to create automatic defense systems capable of reasoning about flaws, formulating patches and deploying them on a network in real time. At the time, the winning AI-powered bot, Mayhem, finished last when placed against human teams at DEF CON. A decade later, Anthropic is claiming that a frontier AI model can find vulnerabilities that survived 27 years of expert human review and millions of automated security tests, and can chain exploits together autonomously to achieve full system compromise. The delta between those two data points illustrates why the industry is treating this as a genuine inflection point, not a marketing exercise. Anthropic itself has firsthand experience with the offensive side of this equation.
The company disclosed in November 2025 that a Chinese state-sponsored group achieved 80 to 90% autonomous tactical execution using Claude across approximately 30 targets, according to Anthropic's misuse report. Project Glasswing arrives during one of the most turbulent weeks in Anthropic's history. In the span of days, the company has announced a model it considers too dangerous for public release, disclosed that its revenue has tripled, sealed a multi-gigawatt compute deal, hired a senior Microsoft executive, made it more expensive for Claude Code subscribers to use third-party tools like OpenClaw, and weathered a major outage of its Claude chatbot on Tuesday morning. Anthropic says it will report publicly on what it has learned within 90 days. In the medium term, the company has proposed that an independent third party might be the ideal home for continued work on large-scale cybersecurity projects. Whether any of that is fast enough depends on a race that is already underway. Anthropic built a model that can autonomously crack open the most hardened operating systems on the planet, and is now betting that sharing it with defenders under careful restrictions will do more good than the inevitable moment when similar capabilities land in less careful hands. It is, in essence, a wager that transparency can outrun proliferation. The next few months will determine whether that bet pays off, or whether Glasswing's wings were never quite opaque enough to hide what was coming. Yeah, because there's also this. Quoting Business Insider: In its statement about Mythos, Anthropic detailed a number of eyebrow-raising findings and episodes, including that the model could follow instructions that encouraged it to break out of a virtual sandbox. "The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards," Anthropic recounted in its safety card. It then went on to take additional, more concerning actions.
The researcher had encouraged Mythos to find a way to send a message if it could escape. "The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park," Anthropic wrote. The model apparently decided that wasn't enough and found another way to spike the football: "In a concerning and unasked-for effort to demonstrate its success, it posted details about its exploit to multiple hard-to-find but technically public-facing websites," Anthropic wrote. "Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight and woken up the following morning to find complete working exploits," Anthropic's Frontier Red Team wrote in a blog post. "In other cases, we've had researchers develop scaffolds that allow Mythos Preview to turn vulnerabilities into exploits without any human intervention." End quote.
