
SED News is a monthly podcast from Software Engineering Daily where hosts Gregor Vand and Sean Falconer unpack the biggest stories shaping software engineering, Silicon Valley, and the broader tech industry.
A
Hello and welcome to SED News. As I'm sure some of you know already, this is a different format of Software Engineering Daily where Sean and I. I've got Sean with me, I should say. Say hi, Sean, as usual.
B
Hey, Gregor. Hey everyone.
A
Hey. I often forget to let Sean say hello. Yes, we are here. Slightly different format where we touch on some of the main headlines in tech. We then go into a bigger topic in the middle, and then we take a fun spin look at Hacker News highlights that Sean and I have picked up over the last couple of weeks. So as we often do though, bit of a catch up. I think Sean and I have both been wrapped up in conferences over the last couple weeks, so. Yeah, exactly, yeah. So where have you been and how is it going?
B
So I was in Las Vegas recently for Cloud Next, which was fun. I'd never actually been to Cloud Next, even though I worked at Google, I worked in Google Cloud and I've partnered with them a number of times, but for whatever reason I just never ended up at Cloud Next. But it was good. And then I was actually supposed to be in India this week, but thankfully that trip got postponed, because I would have been back to back since I leave for Boston for IBM Think Sunday night. So there's kind of a lot going on. We're thick into the spring event season anyway, with May, June and so forth. So you're in San Francisco, where I live. So how are things going? How are you enjoying it so far?
A
Yeah, absolutely. No, it's nice to be back. I think last time I was in SF was end of 2024 actually. So yeah, been a while in tech terms, but really nice to be back. Always like the weather here. Nice change from Singapore. I was at Stripe Sessions, which has grown enough that it's in Moscone West, which is probably the second largest venue in SF. So yeah, really awesome production. Just to call out one little detail: I'm not sure if people watch the Stripe podcast. I'm sure many of you do. It's called Cheeky Pint, and it has John or Patrick Collison sitting in a mock Irish pub having a pint of Guinness with a guest. And they had actually put together a full mock-up of that set in the venue, and people could go in and get free little mini pints of Guinness in a mock Irish pub. I thought that was an amazing detail.
B
Yeah, that's cool.
A
Yeah, very cool. But yeah, it's exhausting, as I think you know, Sean, going to conferences. I was on the Supabase booth for quite a few hours each day, but I got to talk to an amazing bunch of people, some of them SE Daily listeners as well, which is always fun, getting to meet them in person.
B
I was gonna mention that at Cloud Next, I met one of the fans of the show who works for IBM and came up and let me know that they listen. But one of the things I wanted to share with you was that they said, I don't listen to every episode anymore, but I always make it a point to listen to the SED News episodes.
A
Oh, I like that.
B
Yeah, so it's always nice. You never know, like we're speaking out to the ether. You don't always get the feedback. So it's always nice to hear people are finding these shows.
A
That's awesome. That's really great to hear. So yeah, if anyone is listening, it was great to meet you. And equally I met a whole bunch of people that maybe weren't listeners, or maybe they are, I didn't ask. I didn't make it known to every single person that I was that random voice on SE Daily. But yeah, I just had some amazing conversations. Really smart people in the Bay Area. They come from all over the world and they congregate here, and I think that's amazing. And obviously I'm recording this from my hotel room in a nice makeshift setup: the microphone I normally use, propped up in an ice bucket, but it seems to be working great. Well, moving to the headlines. We've got a few things to touch on. Especially with the last two weeks, there's been quite a few meaty headlines. So these are not from, say, the last 24 hours. We wanted to dig up some things from the last couple weeks, given that it is conference season. I don't know about you, Sean, but I basically can't focus on anything else when I'm at a conference, and so I end up missing literally all the news for a week anyway. So this is quite good to catch up.
B
That's why you need an AI agent processing your newsfeed at all times.
A
Definitely need a post-conference agent to basically just pick up all the pieces of my life that have gone on hold for the last three, four days. But moving into the headlines. There will be a couple of security ones here; there are quite a few security headlines actually, touching across different areas. So the first one is Mythos. This is a large model that was released, and I say released, we'll get to the release part in a second, by Anthropic, basically saying that this is a security-focused model that can effectively exploit virtually any system, especially legacy systems. These really deep-seated, very, very critical bugs that have probably been sitting in legacy software for almost decades, and suddenly these can become almost zero days. And obviously these legacy systems do sit in some pretty important places. But yeah, effectively they say Mythos can autonomously discover previously unknown vulnerabilities in every major operating system and browser. It can carry out multi-step cyber attacks that would take humans days if not weeks. I think a 27 year old flaw in OpenBSD was one of the standouts that they were talking about. But yeah, let's talk about the rollout of that. They're saying that it's going to be very controlled, because obviously the power of this model, especially in the wrong hands, would be pretty terrible. So I think they're saying that they're only releasing it to major tech and financial firms. I think they mentioned Amazon, Apple, Microsoft, JP Morgan Chase. They're calling this Project Glasswing, the idea being to patch critical vulnerabilities before the bad actors, now that they're aware of this, can go exploit them. It's a funny one. It's almost chicken and egg: do you release the model or do you keep it back? And someone analogized it to luxury fashion, like it's the Birkin bag of AI models.
B
Right.
A
It's like scarcity demand.
B
I don't know, maybe it's part of some grand launch strategy marketing ploy. But even if it's not, if you tell everyone it's too powerful to make available, it's terrifying. And then when you let only the biggest companies use it, it ends up driving a lot of demand, because people want the thing that they can't have a lot of times.
A
Yeah, absolutely. We touched on this last month on SED News, where politics comes into play, especially for the two big players, Anthropic and OpenAI, and it comes in here as well. Anthropic is still feuding, if you like, with the Pentagon over refusing to let its normally available models be used for autonomous weapons and surveillance. So do they then release this to the U.S. government? Because it would seem a bit strange that this crazy powerful security model can be released to someone like Amazon, but it cannot be released to the US government. I mean, that seems strange, but only if you forget who the president of the United States is for a second. Yeah. So I don't know what you thought of that, Sean.
B
Yeah, I don't know. I haven't heard as much about the political story around Anthropic the last few weeks. Maybe there's just so much going on that it's been a little bit buried in my newsfeed. But certainly, historically, that would seem a little bit strange. It's like, hey, we trust these large corporations, but we don't trust the US government with this, or trust our own government with this. I don't know all the ins and outs there, but I think Anthropic has been pretty good at having a certain philosophy about controls around models, even delaying launches of models because they haven't met their security bar or their certain guardrail expectations. And they've stood behind this philosophy as a company multiple times. So just following their track record, the assumption I would make here is that their intention is still true to that vision, even though as an outsider it does look funny, because like we said, there are all these analogies to fashion, and if we tell everybody that it's too dangerous to use, all that does is make people want it even more. So what is the ultimate goal and intention behind this? The glass half full perspective is, hey, we created this thing before some bad actor might create it, and we're going to give it to key players so that they can patch things ahead of someone being able to exploit them.
A
Yeah, absolutely. And there were rumblings that a contractor, it was all unnamed, a contractor of one of the companies that had been given access had then passed this on to effectively the dark web, and access to this model was now available. Anthropic, I think, absolutely refuted this and said that they had not seen any evidence whatsoever that people who were not supposed to have access were indeed accessing it. But yeah, there's always going to be a motive for someone to claim that they do in fact have access to this, for example, simply charging someone money and then running away. So we don't know, but these very unverified claims are running around as well. Yeah, we haven't seen the sky fall yet, basically.
B
Yeah, we haven't seen the sky fall yet. I mean, that's a good measurement. It would be all these major websites suddenly going down because someone with access to this is exploiting them. I do think that the finding of the 27 year old flaw in OpenBSD is pretty staggering. You just think about the number of security engineers and engineers that have looked at that over the years, only to have an AI model figure it out. It's a humbling moment for humanity. It's like Garry Kasparov losing to IBM's Deep Blue, or the work that DeepMind did to beat the world's Go champions. And now we have a model that can find flaws in open source that essentially thousands of really, really gifted engineers have looked at and handcrafted and been blind to.
A
That's a really good point. And I guess with so much of this legacy software, engineers, no matter how smart they are, are just not going back and combing over all the code that was written 27 years ago.
B
Yeah, you probably assume it works, right?
A
You assume it works. And unfortunately that also means you assume that if something was a problem, especially after 27 years, it would have been discovered, and that's clearly not the case. So yeah, good call out. We'll see how this develops: how access through Project Glasswing is rolled out, who else gets access. I think that'll be interesting once you move away from, say, the hyperscalers. Who actually is supposedly allowed to use this model? So yeah, we'll see how that goes. Moving on, this was the Context AI breach, where one of the main recipients of bad news was Vercel. Effectively what happened here was an attack chain that started when an employee at Context AI was infected with Luma Stealer malware after downloading what they thought to be Roblox game cheats. I mean, this is why I just think that people on the other side of the coin, who wish to do malicious things, do think of pretty ingenious ways to get people to install things. This could have been for that person's child, for example, who knows? But anyway, the harvested credentials included things like Google Workspace and Datadog and that kind of thing. And then the attacker used compromised OAuth tokens from Context AI and managed to pivot into a Vercel employee's Google Workspace account, and then into Vercel's internal systems. And yeah, this certainly lit up on our screens at Supabase, because I think a lot of people know that the dashboard that all our users use is actually on Vercel. So we suddenly jumped into action and had to do a whole ton of credential rotations. If I just looked at the list of things that had to be done, that was a good chunk of our front-end team's day, just combing through that. So that's why it's great to have a team that can just jump in and do that. But this must have affected tons and tons of people. So it's just.
B
Oh yeah, I think it's the classic human hack of, hey, let's dangle something out that somebody might want, like Roblox game cheats. Or you go back to the early 2000s, when you had the Anna Kournikova virus, where it was supposedly pictures of this tennis player that people were attracted to, and then inevitably some subset of people are going to download or click on the
A
thing and then that's far too obvious now. You definitely couldn't get away with just photos of somebody. Yeah,
B
yeah. I think the thing you see a lot of times in these reports is that, as a consequence, Vercel is now defaulting new environment variables to be classified sensitive, so they're encrypted at rest automatically. And I think in all these circumstances, and you worked in security for a long time, it's like, why wasn't that the default from the start? This is often the case in these kinds of exploits. We saw this as well with Snowflake, where a contractor got access to an account through Snowflake that didn't require two-factor authentication and a bunch of other stuff, and that allowed them to get access to some subset of data. It wasn't necessarily that Snowflake was explicitly vulnerable; this person actually had proper access, but there was no two-factor authentication on by default. And then the reaction from Snowflake was to make two-factor authentication the default, and that was forced on everybody. It's like, well, we could have had that in there in the first place. And this comes up over and over again. Companies consistently fail to make secure by default the actual default. It's kind of a simple concept, but it's missed over and over again. I'm not sure why that is. Perhaps there's just not enough of a financial reason until that financial reason is a news headline, or it's just something that we tend to miss.
A
Yeah, thinking just from the pure security perspective, I've often said security is just sort of flaws in how humans operate, whether it's failing to protect or that kind of thing. I wonder, at Vercel for example, was this on the roadmap and it was going to come in two months? That kind of thing. Like, when is the moment that it was too late to implement this? And I'm sure this was not a small lift for them to implement. I can imagine it was sitting on a roadmap somewhere on their side, and maybe they were just going through all the checks and balances of, okay, what's it actually going to take to do this and make sure that no customers are affected? Downtime, etc. Downtime is usually probably the thing that gets in the way: okay, how are we going to do this in a zero-downtime manner? And then bang, you get hit with, who could have predicted that somebody three arm's lengths away would download some fake Roblox cheats, and that's what leads through these supposed layers that you thought you had. I guess the advice there is, if you're very aware of some major thing that should be especially encrypted across your platform, you should basically just drop everything and move on to that.
B
Yeah, I mean, in these roadmap conversations I think that sometimes it's easy for teams to punt on some of the security features, because they're not necessarily revenue-generating features. And that's what it comes down to: oh, we invest time and resources in this thing where we know it's going to drive revenue, and it's very unlikely that we have this security exploit, we can deal with this later, and then it gets kicked down the road over and over again. I don't know the particular story at Vercel, but just having been part of product organizations and some of that decisioning, and given the story we consistently see at these companies that do get exploited, it seems to be the case that it's always fairly easy to make the argument that, oh, we can deal with this later, until it becomes a thing.
A
Yeah, and this has happened before, as some listeners will know what I'm about to say. The kicker here is that, again, Delve was the compliance company that had issued certifications for Context AI. Delve positioned itself like a Vanta. The only slightly minor detail there is that they were forging a lot of their compliance certificates, unlike Vanta. And after we did that episode, I think it was last month, where we looked at Delve, I did look into the whistleblower nature of it, and there's a whole website where someone's documented everything, and calls that the founders have gone on defending themselves, saying no, no, almost in a very, I would say, dismissive way, like, don't be ridiculous, this is just completely false. And then suddenly it's just all very clear that this is real, that they are forging everything. So it could be that somebody on that whistleblower side, or just somebody a bit of a vigilante, is thinking, well, we're going to show Delve how bad this could get, we're just going to target companies that were using Delve. I'm not saying that's been confirmed by any means. It just seems interesting that we've now seen two major breaches over the last, I guess, two months, both of which had Delve as the compliance people.
B
Yeah, I mean it's a small sample size, so I think I'll hold off on my tinfoil hat conspiracy theory for a little bit.
A
I think that's a good take.
B
But if we're five months from now and every month we've reported another Delve-related data breach, then I will join your circle of conspiracy theorists.
A
Yes, well, you never know. Keep tallies and see how we net out at the end of the year. Gregor's Delve Conspiracy Count.
C
Most AI frameworks started with voice and bolted on video as an afterthought. Vision Agents by Stream was built video-first from day one. It's an open source Python framework that lets you build real-time voice and video AI agents in minutes, not months. With 25 integrations for models like OpenAI, Gemini and Claude, sub-500-millisecond latency on Stream's global edge network, and support for YOLO, Roboflow and custom CV models, you get a production-ready stack without the infrastructure headache. Whether you're building coaching tools, multimodal assistants or real-time security pipelines, Vision Agents handles the hard parts. Get started free at visionagents.ai.
In mobile application security, good enough is a risk. GuardSquare uses advanced multi-layered code hardening techniques, automated runtime application self-protection, and mobile application security testing, combined with real-time threat monitoring, to deliver the highest level of mobile app security. Discover how GuardSquare brings all these together to provide mobile app security for your Android and iOS apps without compromise at www.guardsquare.com.
You know Fidelity is a financial services leader, but did you know that inside Fidelity is a community of technologists working together to shape the future of finance and tech? Fidelity is always investing in tomorrow, from emerging tech to cutting-edge tools that will transform what comes next. Their technologists are encouraged to keep learning so they can expand their skill sets, explore new ground and stay ahead of this rapidly evolving industry. And right now, Fidelity is hiring technologists to join their team. Fidelity technologists get the best of both worlds: startup energy that's grounded in the stability of a financial institution. That means support, resources and amazing benefits. Bring your skills to a culture where you're empowered to dream big and build the tech that drives an organization and makes a real impact on people's lives.
Find out more at tech.fidelitycareers.com. That's tech.fidelitycareers.com. Fidelity is an equal opportunity employer.
A
So moving away from security, this was more of a macro headline that has again touched especially the financial news: layoffs, unfortunately, again. The two standout companies were Snap and Meta. What's interesting, I guess, is more the communication around the why. I think commentators are always digging in on the whys, and the whys are usually derived, especially for these public companies, from the public statements that they put out, maybe as part of earnings calls and that kind of thing. So Snap is famously unprofitable. Evan Spiegel said that they were hitting this critical moment where they really have to do something to make the company a profitable one, doubling down on AI and the Specs product. And that led to a thousand job cuts, which apparently is roughly 16% of the workforce. They basically had a lot of pressure from investors to start massively improving their financial performance. And they do look a bit strange, quite frankly, next to a lot of their, when I say peers, companies that might have IPO'd around the same time, Meta being one of them. But we'll come back to Meta in a second. But yeah, not being profitable in this era, for the age of the company, I can see why they were getting some heat from investors. I always feel, and I can very publicly say, I very much feel for anyone that's been affected by these layoffs. It's a horrible situation. But from a logical standpoint, it might make sense that at least investors were getting a bit unhappy with Snap's performance.
B
Yeah, I mean, I think being a public company in the public market right now is a tough spot to be in, in a lot of ways. And then on top of that, from an employee perspective, some of these companies, when they announce layoffs, see stock bumps. The stock goes up after saying, hey, we're laying people off and we're focused on efficiency. And that creates, I think, a certain cascading effect, where when the stock is not doing well, the investors put pressure on the company to make a change, and if we're rewarding companies for downsizing, at least from a stock valuation perspective, then that's the easy decision for a company to make, because it's going to positively reflect in the stock. And then on top of that, I think with social media companies in particular, there's historically been a lot of human capital deployed to review and curate content. There's just a lot of process there. I remember going to Hyderabad in India when I was at Google, and there's a huge number of people that work there to review all the YouTube videos and make sure that someone's not putting something up that's awful. So there are a lot of people to do that, and I think all these social media platforms have some version of that. But if they can offload some of that to AI, then they can do a lot of this stuff in a more efficient way, taking out some of the tasks that historically have required a lot of human energy. But I do wonder, broadly speaking, about the future of social media. We have social media companies that are laying off humans to build AI that generates content that used to be made by humans on their platforms. The feed is increasingly AI-generated content, served by AI-curated algorithms, to humans whose attention is then sold by AI-optimized ad systems. At some point, who is this for? An AI circle of content, to optimization, to serving ads.
And it's kind of like the human element maybe is the people who are passively consuming this stuff, but it's all AI generated content. What are you signing up for as a user?
A
Yeah, absolutely. And I mean, we should bring Meta into this as well, being effectively the most popular social media company in the world across a couple of platforms, excluding X, I guess. But just to compare and contrast: Snap said they're going to double down on AI and invest a lot more in this moonshot of the Specs product, and we've talked a lot about that idea in past episodes. Meta, however, is saying they're actually doing it to offset the investments that they're making, being almost, I would say, more, quote, honest. I mean, that's a very polite way of saying, we're losing money on our AI infrastructure bets, all this stuff, and we need to do something to shore that up and continue to be profitable, to look profitable. So they're cutting 10% of the workforce, which is much larger in comparison. I believe it amounts to about 8,000 employees, and also not hiring for 6,000 roles that were open. I can't even fathom that a company has 6,000 open roles, that's just something I can't get my head around. But yeah, I hate to say I prefer this one, but I do appreciate that they're at least being clear that they're doing it to offset the fact that they're losing money somewhere else, which is, I think, what everybody's been trying to get some of these companies to just admit: are you actually making any money from this capex that you're investing in?
B
Yeah, I mean, I think we're in a place where the world's changing very quickly, and a lot of companies feel the existential threat of what the future might look like. So they have to make fairly big bets and investments to survive this digital transformation that's happening, this paradigm shift around AI. You want to be on the winning side of that, which takes investment and a big bet. It's hard to do that as a public company, because you're under such scrutiny, and people are ultimately looking at how much money you're making, the profitability, the bottom line, while you're also trying to make these innovation bets. Some companies have the luxury of a really healthy revenue-generating business, like a Google for example, where they can spin off innovation arms that are well funded and it doesn't deteriorate their core business. But for other companies, especially smaller public companies, that's hard to do. If you have your core business and you're also trying to change as a business, how do you fund the innovation while also protecting and growing the core business? That's a difficult balance to strike. So if you want to survive the existential threat, something has to give.
A
Yeah, exactly. So again, obviously very sorry if anyone listening is caught up in this. Unfortunately it looks like this was nothing really to do with anyone's skills, etc. It was just a purely financial thing that, unfortunately, seems to be part of the landscape of the last five years with big tech. So, just very briefly before we move to our main topic: this dropped, I believe today actually, which was that CO2, which is a huge investor, has a plan to buy up land for data centers. And the question is why? People are speculating that this is actually for Anthropic. So it's kind of interesting that rather than just invest more money in Anthropic, they've gone straight to the infrastructure themselves, or the base of it, and said, well, maybe we'll just buy land and then give that, or lease that, I guess, to one of our investees. So that's an interesting strategy there.
B
You know, a lot of companies are trying to sort of pivot their way into being AI companies. Maybe they need to pivot their way into being real estate companies and just like own the land that companies need to build their data centers on.
A
Yeah. And similarly, I think it was yesterday, it's been rumored that Anthropic is going to be doing one more funding round, possibly at a 9 billion valuation. Sorry, 900 billion valuation. Clearly it's the end of the week; my brain can't tell 9 from 900. And this is again being floated as probably the last raise pre-IPO. I think that was probably said last time as well, but I'm sure this is the last, last, last, final, final, final. The documents on the fundraising: final, final, final.
B
I mean, there are a lot of companies that are still private that I remember talking to in interview processes several years ago, and they were like, oh yeah, we're 18 months from IPO, and this was five years ago, and they've raised multiple rounds since then. So a lot of it depends, I think, on timing the public offering to what's happening in the market. And there are also a lot of things that you have to do to get ready to go public, which take time.
A
So our main topic today, we're really doing just a deep dive. I say a deep dive, but it's a high-level deep dive, if that's possible, on the roughly 700 billion in AI capex we've seen since the AI boom. I guess here we're just taking a pause and looking holistically at where things are across a lot of the big players. We do this every so often on SED News: take a pause and touch on a lot of the big names and what they're doing and why. And we feel this is kind of important because of the speed at which things are moving. I keep saying it, but a month right now is easily what six months might have been pre-AI. So let's try and actually get a handle on what this scale even is. Hyperscaler capex for 2026 alone has reportedly been 650 to 700 billion across Amazon, Google, Meta and Microsoft; that was a Morgan Stanley report that made this guess, if you like. And in a single week, Google committed up to 40 billion to Anthropic, Amazon committed 5 billion to Anthropic, and there was apparently $100 billion in AWS spend over a decade. Nvidia crossed the 5 trillion market cap level, which again is just hard to fathom at all. So if you think about it, Anthropic is now simultaneously backed by Google and by Amazon, as we've just touched on, and is probably going to be touching a 900 billion valuation. This is just the largest infrastructure investment cycle in the history of technology. It's crazy.
B
Yeah, absolutely. I mean, the numbers are pretty staggering. I still remember the days when, and people still use this terminology, we referred to unicorn startups valued at over a billion dollars. That started to get silly as more and more companies were valued at over a billion dollars. It used to be a big deal, and now there are so many companies valued over a billion that it kind of dilutes the idea of a unicorn and it becomes meaningless. We probably need to shift that; maybe it's $100 billion, or we're going to get to a trillion dollars in terms of valuation. But there's literally been more money going into AI compute in the last year than the entire cloud buildout over an entire decade. It's kind of a strange world where we have Google and Amazon both investing in Anthropic while also being competitive with it. It's almost like they're hedging a bet somewhere along the way: in case we don't win the model wars, we still have skin in the game.
A
Yeah. And if we then look at it a bit more strategically, there is a vertical integration play, to use what feels like an archaic term these days. But think of model labs becoming actual infrastructure tenants, like Anthropic's 100 billion AWS commitment. It sort of means that Claude's training and inference are potentially structurally tied to Amazon's chip roadmap, that is, Trainium and Graviton. Meanwhile OpenAI, despite some news of them starting to part ways more and more, remains deeply integrated with Azure. Google is both an Anthropic investor and building competing models, as you've just touched on, Sean, like Gemini, on its own TPUs, which we did a bit on a couple months ago. And then meanwhile, Meta's confirmed it will use hundreds of thousands of AWS Graviton chips. I think this is the thing: bets do have to be made, because it is quite difficult to unwind or just shift over the underlying chip infra these models are being trained on. It's not just 'oh, I'm going to go run it on my other machine somewhere.' I think it's analogous to when Apple moved all their hardware off Intel onto their own chips. That takes a good year or two of planning to actually get there.
B
This stuff is too big to be fully self contained within one company. So inevitably you're going to get people who are competitive with each other also completely dependent on each other from like chips to cloud infrastructure to the models themselves to like the actual applications and so forth like that. That's just inevitable because it's so big it can't be essentially self contained within one company. We're well beyond that at this point.
A
So if we then take another leap over to, I guess, the human side, since we're looking at this across multiple facets. We touched on this either last month or two months back, really just looking at what the hiring landscape looks like here. But there was some data from TrueUp: 67,000 open software engineering positions across 9,000 tech companies, which is double the mid-2023 low and up 30% in 2026. And coming back to these conferences we've been at, it's interesting. I actually got quite a lot of questions from, I would say, especially younger people attending, which I love to see; it's really great to see people still studying or maybe two years out. They were asking me what I think about engineering degrees and whether engineers will be needed. And I did just say absolutely. It's just that the concentration of where engineering will sit will be in these companies that are simply doubling down on AI, and the net consumers of these platforms are maybe going to decrease their reliance on human engineers. But I think that is going to be far outweighed by the net increase from these huge players, and all the ecosystem around the huge players, needing just more and more engineering.
B
I mean, there's just a lot of stuff to build right now, and a lot of experimentation, so you need engineers to build it. And I think the responsibilities of an engineer, and what day to day might look like for an engineer, are certainly shifting, but clearly hiring's up 30%. I just think it's not business as usual; it's a little bit different. One thing that's interesting too is, you know, IBM, they're tripling their entry-level hiring around junior engineers. Intuit's also going after juniors. And one of the topics of conversation around agentic engineering and what it means to be an engineer is that some companies have focused on, hey, we're only going to hire senior engineers, because we want engineers with some maturity in their career so we can rely on their judgment when it comes to evaluating what's coming out of the agentic engineering productivity tools. And I think IBM and Intuit and a few others are taking a different approach, where they're like, hey, these junior engineers are actually super valuable because they're AI native; they grew up with and are adopting these tools and technologies faster, so we want to invest in them. In some ways this is nice to see, because I've been a little bit worried: if everybody's hiring senior engineers, how do you become a senior engineer in the future, and what does that mean for the next tranche of engineers? One disconnect, though, is that a lot of the hype around AI says there's going to be massive job loss and engineering is going to go away, and we are actually seeing some impact in terms of enrollment in computer science programs. So what does that mean for the next tranche of people entering the industry? Suddenly we may not have enough engineers to go around if this 30% hiring increase continues.
A
Absolutely. And on role types, there's this newish role called technical ambassador, and apparently OpenAI are hiring thousands of these. It's really a bridge between what's being built and almost solutioning with potentially non-technical stakeholders in these companies. Because the spending power can be so huge, but there's always that massive gap of, 'but what is this actually going to enable us to do, and can you show us examples?' and so on.
B
I mean, applied AI is, I think, that function at Anthropic, for example. And there are a lot of new forward deployed engineers, which was a concept from Palantir and is now a very popular role. I think that's a sign of this transitional period going on, too, where companies have capital they want to deploy behind AI, and it's really strategically important for them to do so, but a lot of the time they don't actually have the resources and know-how to turn that into something that delivers value to the company. So when they do invest in a particular platform or technology, they kind of need people from that company to come and hand-hold them to get to a place where they can be successful.
A
Yeah, final macro area on this. Yes, we're going to go back to security for a couple of minutes. We've talked about this several times, but you can never talk too much about security. These tools are expanding what we'd call the attack surface faster than the security tools themselves can keep up. We saw that with the Vercel side of things, where basically, because they were just going fast with tools like Context AI, there were overly broad OAuth permissions. Would that have been authorized if we weren't adopting so many AI tools? Instead we're saying, oh, but I can't move fast unless it has access to everything, especially all the way up to leadership. I'm not talking about Vercel specifically; I don't know the ins and outs there. But leadership generally, I think, is under pressure to say yes, because if leadership is saying, no, your tools should stay very scoped, and they can't touch your email and they can't touch your Slack messages, well, then what's the point? I need my tools to know everything, and that's what keeps me ahead of everybody else. So it's just one of those things. Cisco's State of AI Security 2026 says that 83% of orgs plan to deploy agentic AI, but only 29% report being ready to secure it. So obviously there's quite a disconnect between what people want to do and having the means to actually keep up on the security side.
B
Yeah, it's a huge challenge right now, because the silos and swim lanes that companies build up around different parts of the company as they grow become barriers to AI actually being intelligent, able to draw interesting results across disparate data sources and things like that. So you want to give the AI system access to those things, but you're not necessarily set up in a way to do that successfully. Either you slow down and try to figure out a way to do it where you can control it, or you just open things up and take on a lot of risk, where you might be exposing information you don't want to expose. And there's so much pressure on companies to deliver value around AI, and to publicize progress around it, that there are probably a lot of companies bypassing their normal standard procedures, even around vendor procurement and things like that. It's kind of similar to what we talked about with engineering and the CircleCI report: there's pressure on engineering organizations to put out product faster, but not all the validation and verification of the AI-generated code is necessarily there right now. You either end up not putting out more product because you're spending the time to validate that the thing you generated quickly actually works, or you skip that step and push out a lot of code that potentially leads to further security problems.
A
To kind of recap, we've basically been trying to highlight here that people often just think of their cloud provider as neutral infrastructure, but actually it's kind of not, if you really look at it. We've got this Google-Amazon-Anthropic triangle, with model choices and cloud choices converging. So if you are building AI features, and most of us are these days, the infra vendor's chip investment will probably shape the model performance. So just stay abreast of this, because which cloud you sit on, and how the investors are embedded, especially on the infra side, is going to matter. And without being biased, Claude is still having a moment right now, and it just seems like if you're not using Opus for a lot of stuff, then you're kind of being left behind. So that's something to bear in mind.
B
One thing that relates across what we've been discussing in the main topic: some subset of that $700 billion is flowing into these code generation tools, but there's still a significant gap in the downstream validation and testing. And I wonder, I know there are some companies working on that, but in some ways are we kind of ignoring the real problem? We're so focused on code generation, essentially compressing the time to PoC, but there's all this work that happens after the PoC stage to get to production. Eventually we're going to need some way to accelerate that, and to have some confidence it's actually correct, in order to take advantage of the speed at which we're getting code generated.
A
Absolutely. And yeah, exactly that bottleneck you've just been touching on, Sean. Not to beat a dead horse, as they say, but security, that sort of unglamorous part. I think the Vercel breach is a really great example that it really has to be kept on top of. It was effectively a bit of a mission for their team, having to scramble within less than 24 hours to do what they had to do to keep things secure. And I hope we don't have one of those per week, or something to that effect. So I think it is really important that anyone building with AI just has this in mind. I guess, moving on to what we often think of as our favorite part of the show: Hacker News highlights, where we each get to bring in something that's piqued our interest over the last couple weeks. Do you want to kick us off, Sean? What did you come across?
B
I was really hoping I'd find Doom running on a lawn mower or a sprinkler system or something like that this time, but I went in a different direction. So the first one I wanted to mention, I just thought it was interesting. I come from this background with my PhD research and so forth, and this isn't that complicated, but I just love when people find new ways of representing data and interesting trends and things like that. I'm a big fan of the book Freakonomics, which dives into a lot of these weird numbers.
But this is US gender ratios by metro, which was posted by N. Sokoloski or something. Sorry if you're listening, I butchered your name. It essentially shows a breakdown of gender ratios by state, city and metro. So some interesting things there. Washington D.C. is the most female-heavy city in the U.S., at only 45% male, 55% female. And at the opposite end of the spectrum is Colorado Springs, the most male-heavy city, so there are apparently a lot of guys walking around Colorado Springs, at 55%. And then, not too shocking in a lot of ways, Silicon Valley is pretty male-dominant; it's like the third most male-heavy region in the US, probably because there are a lot of engineering jobs here and engineering typically skews very heavily male. So you end up with a somewhat male-dominated area of the world versus other parts, and then some cities are almost exactly 50/50.
A
I find that kind of fascinating. Why D.C., say? I guess that's 55% female and 45% male. Yeah, very fascinating. I wonder if I can find the same for European cities or that kind of thing, and whether there are any massive disparities there.
B
Yeah.
A
I don't know.
B
Be interesting to look at.
A
Yeah. On my side, I think the first one is one of these kind of feel-good developer posts, just a little blog post by somebody. The blog itself is matthewbrunnell.com, and this was posted by user specx, who could be the same person, who knows. It was called 'Using coding assistants to revive projects you were never going to finish,' which I think is fairly self-explanatory. But it's nice, pulling a quote from towards the end, where Matthew said: 'In my mind there are different buckets for personal projects. One is things I do to learn and grow and the other is things I really wish existed. This kind of project falls into the second bucket. Using AI coding assistants to revive those projects is sort of a form of wish fulfillment. I never would have gotten to it, but now I can have the project I wasn't able to have before, and one less metaphorical book sitting unread on the bookshelf.' A bit of a long quote, but I think we can all identify with that: things we started, and quite frankly it's not that we didn't want to finish them, but things get in the way, and we realized, oh, this is going to be way more time consuming than I expected; it would be fun, but I simply don't have the time. So that's pretty cool, that even if we're not coding day to day we're able to bring in these tools. I can think of at least five projects that probably fit that bucket that, quite frankly, I should just hook Claude Code up to and say, hey, here's where I was trying to get to. Finish it off, please upgrade some packages, secure the whole thing, and there we go. I think that's probably something I'll be doing on a lot of the travel I've got over the next couple of weeks.
B
Yeah, I think that's almost a subclass of that second bucket, where you have these project ideas that you're like, oh yeah, I would like to do that, but I just don't even have the time to start it because it's going to take too much time. But now you can kind of prompt your way to doing it fairly quickly. I was using Claude to build some games for my son based on ideas that he had, and I certainly could have done that myself; I have the programming skills. But it would have taken a reasonable amount of time and energy to crank that out for some little game that he may or may not even play with, versus being able to take his ideas, have Claude churn away at them and pump something out, and then have him explore it. It's also a good way for him to see how you can turn ideas into something that manifests itself as computer software or some sort of product.
A
What was your second one, Sean?
B
Yeah, so the second one was this headline that came up this week, which was Granite 4.1, IBM's 8 billion parameter model that matched their previous 32 billion mixture-of-experts model. So they have three different open source models, at 3 billion, 8 billion and 32 billion parameters, under Apache 2, trained on 15 trillion tokens. I think it made a lot of headlines because they were able to match the performance of the 32 billion model with the 8 billion across nearly every benchmark. The way they did that was they really focused on data quality over parameter scaling, and they had this five-phase training pipeline to do it. I think that's really interesting, because over the last year-plus, when it comes to models, a lot of the time we talk about their limitations, like we're running out of data to train the models on, and where are we going to get all this data? One of the things they were able to show with this is that, hey, we can actually drastically improve the quality of the model while keeping the model size reasonable if we really focus on data quality during the training process and during reinforcement learning and so forth. They were also really open about some things that went wrong in training and how they fixed them, which I think was a little bit different and kind of refreshing.
A
Yeah, that's fascinating. Again, the leaps of progress are just insane. So that's really cool, and thanks to user Steve Herringone for posting that. On my side, the second one, at least when I found it, hadn't gotten tons of points. I wonder if this is one of those Hacker News cases; I'm not sure how well known this is, but you can submit an article to Hacker News and it might not do very well, you'll get 12 points or something, and every so often someone at Hacker News will actually reach out and say, we're going to repost this because we think it's really interesting and should get more airtime. That happened to one of mine. I posted something about remote-controlled telescopes, where you could bring your telescope to this facility, put it somewhere, and they would store it and run it and you could remote into it. It didn't go anywhere the first time I posted it, and then Hacker News said, hey, we're going to repost it, and it went to the top, which is kind of interesting. So I think this could be one of those: it's only got 28 points, but it was on the front page. It's called Cheating at Tetris. The website is chalkdustmagazine.com, and the user was T3, nice short name there. The TLDR is: imagine you get to pick which Tetris pieces your opponent must play. Could you force them to lose? The article works through this mathematically, as you'd expect from a piece on how to cheat at Tetris. I think everyone roughly remembers Tetris: shapes fall from the top of the screen, you rotate them, and if you make a complete line, that line disappears. But if you don't make a complete line, it starts to stack up, and if you hit the top of the screen, it's game over.
So you've got these different piece types, different shapes; you can think of them as the letters I, J, L, O, S, T, Z. Each single piece type on its own can be played indefinitely without causing a game over, so picking just one piece won't work. But if you mix just the S and the Z blocks, since they don't fit neatly together, the best the player can do is create separate columns of each. And because the board is 10 cells wide, fitting exactly five 2-cell-wide columns, you always end up with an odd split, three of one and two of the other, which causes an imbalance that slowly fills the board. The article goes on to identify which piece combinations can mathematically guarantee a loss regardless of how well the player plays. So I do encourage you, if you're a Tetris fan, to go and look at that. It's very hard to explain all the permutations in a quick Hacker News highlights section here, but chalkdustmagazine.com, go there and check out that fun piece. So yeah, looking ahead, for the first time in history, Sean and I are probably going to meet in person, which is going to be fun. We're going to grab coffee, hopefully this weekend in SF. But apart from that very exciting event, anything else, prediction-wise, over the next month, Sean?
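As an aside, the odd-split counting argument Gregor describes can be sketched as a toy calculation. This is an illustrative simplification, not the Chalkdust article's proof: it assumes the player reserves whole 2-wide columns for S-pieces and Z-pieces separately, that each piece confined to such a column adds roughly two rows of height, and the function name is made up for the sketch.

```python
from math import ceil

BOARD_WIDTH = 10
COLUMN_WIDTH = 2
TOTAL_COLUMNS = BOARD_WIDTH // COLUMN_WIDTH  # 5 columns, so any S/Z split is uneven

def tallest_column(pieces_per_type: int, s_columns: int) -> int:
    """Height of the fullest column after dropping `pieces_per_type` S-pieces
    and the same number of Z-pieces, with `s_columns` of the five 2-wide
    columns reserved for S and the rest for Z, pieces spread evenly."""
    z_columns = TOTAL_COLUMNS - s_columns
    s_height = 2 * ceil(pieces_per_type / s_columns)
    z_height = 2 * ceil(pieces_per_type / z_columns)
    return max(s_height, z_height)

# Five columns only split as 1/4, 2/3, 3/2 or 4/1, so one piece type always
# has fewer columns and its stack outgrows what an even 2.5/2.5 split allows.
best = min(tallest_column(30, s) for s in range(1, TOTAL_COLUMNS))
even = 2 * ceil(30 / 2.5)  # the unattainable even split, for comparison
print(best, even)  # the best real split is strictly taller than the ideal even one
```

In this toy model the best achievable split (3/2) still grows faster than the impossible even split, which is the imbalance that eventually tops out the board.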
B
I don't know. I don't have a really good prediction. I mean, I could go with the lazy one that I did last time, which is we're going to have more security breaches, which I think will actually be the case. But I don't have an out there prediction this week.
A
Yeah, unfortunately security was hovering in my mind as well; I sound like a bit of a broken record at this point with that one. But I often look around the room at what's around me. For example, I've got a power bank which I'm aware has some processing power in it somewhere because of its USB-C capabilities. So I'm going to say that somebody hacks Doom onto a power bank, or a power plug that has USB-C outputs. Someone running Doom on a power plug or something like that would be kind of fun.
B
I guess the one tying back to one of the topics we talked about is the Anthropic Mythos model: is it going to get more widely available by the time we talk next?
A
Yeah, that's a good one. That's a good one. So maybe I'll just take a bet that they open it up to, say, 10 non-hyperscalers, but that people have a lot of opinions about which 10 they choose. So let's see how that goes. Thank you, everybody, for tuning in. I hope this has been helpful and interesting, getting to catch up on what's been going on in tech. So much is happening, so it's always useful to have us try and condense it for you and give you a quick summary. Thanks for listening, and we'll catch you next time.
B
Thanks everyone.
A
Cheers, Sean.
Software Engineering Daily | May 7, 2026
Hosts: Gregor (“A”), Sean (“B”)
This SED News episode offers a rapid-fire roundup of the past few weeks’ most important headlines in software engineering and tech. Highlights include Anthropic’s controversial “Mythos” model rollout, a deep dive on recent supply chain security incidents, and a macro analysis of the spiraling AI investment boom. The hosts also explore the human side of tech layoffs, changes in the engineering job market, and end with their favorite recent posts from Hacker News.
“Really smart people in the Bay Area, they come from all over the world and congregate here. I think that’s amazing.” — Gregor [03:13]
“If you tell everyone it's too powerful to make it available, it's terrifying. And then when you let the biggest companies use it, it ends up driving...a lot of demand because people want the thing they can’t have.” — Sean [06:15]
“We haven’t seen the sky fall yet, basically.” — Gregor [09:23]
“It’s a humbling moment for humanity... Now we have a model that can find flaws in open source that essentially thousands of really, really gifted engineers have looked at—and handcrafted—and been blind to it.” — Sean [09:23]
“This must have affected tons and tons and tons of people. So it’s just...” — Gregor [12:14]
“Companies consistently fail to make secure by default the actual default. It’s kind of a simple concept but it’s missed over and over again.” — Sean [13:43]
“If we’re five months from now and every month we’ve reported another Delve-related data breach, then I will join your circle of conspiracy theorists.” — Sean [17:18]
“Layoffs... It was just a purely financial thing that is, unfortunately, seems to be part of the landscape as of the last five years with big tech.” — Gregor [26:19]
“Stock goes up after saying, ‘Hey, we’re laying people off and we’re focused on efficiency.’ That creates a cascading effect.” — Sean [21:27]
Meta was more explicit in stating layoffs would help offset investments in AI infrastructure.
AI Capex Surge:
“This is just the largest infrastructure investment cycle in the history of technology. It’s crazy.” — Gregor [29:37]
“There’s literally been more money going into AI Compute in the last year than entire cloud build-out over a decade.” — Sean [29:59]
“It’s not just ‘oh, I’m going to go run it on my other machine somewhere.’” — Gregor [31:21]
“If everybody’s hiring senior engineers, what’s the next...how do you become a senior engineer in the future?” — Sean [34:04]
“If you're building with AI just has this in mind...I hope we don't have one of those [Vercel-scale breaches] per week.” — Gregor [41:41]
1. US Gender Ratios by Metro
“Washington D.C. in the U.S. is the most female heavy city. So there’s only 45% male, 55% female. The opposite end...Colorado Springs.” — Sean [42:57]
2. Using Coding Assistants to Revive Abandoned Projects
“It’s sort of a form of wish fulfillment I never would have gotten to but now I can have the project.” — Matthew Brunnell, quoted by Gregor [45:14]
3. IBM Granite 4.1 Model Performance
“They were able to match the performance of the 32 billion model with 8 billion...because they really focused on data quality.” — Sean [46:48]
4. Cheating at Tetris (Mathematically)
Full episode: SED News – Anthropic’s Mythos, Supply Chain Hacks, and the AI Spending Surge | May 7, 2026