Loading summary
A
You're listening to a podcast right now. Driving, working out, walking the dog. If you're into podcasts, chances are you have something to say too. With RSS.com, starting your own podcast is free and easy. Upload an episode and we distribute it to Apple Podcasts, Spotify, Amazon Music and more. Track your listeners, see where they're from, and start earning from ads just like this. If you've been thinking about starting a podcast, this is your sign. Start your new podcast for free today at@rss.com you're listening to a podcast right now.
B
Driving, working out, walking the dog. If you're into podcasts, chances are you have something to say too. With RSS.com, starting your own is free and easy. Upload an episode and we distribute it to Apple Podcasts, Spotify, Amazon Music and hundreds more. Track your listeners, see where they're from, and start earning from ads like this. Even with just 10 listeners a month, if you've been thinking about starting a podcast, this is your sign. Start free@rss.com welcome to the podcast.
C
I'm your host, Jaden Schaefer. Today on the show, we want to cover the Q1 2026 venture funding numbers. They've just dropped. They're pretty eye watering. There's about $300 billion that has been invested into startups in this single quarter and 80% of that is going straight into AI startups. I want to break down all the numbers of who's getting money, why it's going where. We also have to cover Meta's rogue AI agent incident. It triggered a really serious security alert internally. And I want to talk about a very cool story when it comes to AI in healthcare. One of my favorite topics, there's something called Noah Labs. It just got FDA Breakthrough, which is basically designation for an AI that can detect heart failures from a five second voice recording. Very cool. And then we also are going to Georgia where they are sending three AI related bills to the governor's desk today, including a chatbot child safety bill, which is interesting and we're starting kind of to see this broader wave of AI chatbot regulations that are going through state legislatures right now. There's 78 bills across 27 states. So there's a lot to get into before we jump in. Quick mention if you haven't checked out AI box yet, it's something I've been using pretty regularly. It gives you access to over 70 AI models in one place. So instead of paying for separate subscriptions to chat, GPT, Claude Gemini and everything else, you. You get it all for 9 or 8.99amonth. The thing that I actually like about it is that there is an automation builder built in. You can describe what you want in plain language. So I'm not a developer. You don't need to know any coding or any sort of technical setup and it builds the workflow for you. So I've been using it for a bunch of my own content production stuff and it has been incredibly useful. If you want to try it out, there's a link in the description. It's my own startup AI box. AI. All right, let's get into the episode. So I think something that's been building quietly over the last few months is that there's a massive wave of AI chatbot regulation at the state level. According to the Future of Privacy Forum 2026 tracker, there's now 78 different chatbot safety bills that are introduced across 27 different states. And I think a lot of those follow a very similar playbook. They're kind of focused on disclosure requirements. There's a bunch of stuff that, you know, plays towards child safety protection. Um, there's limits on what chatbots can do when they're interacting with minors. I think the common thread across basically all of these bills is that if you're talking to an AI, you should know you're talking to an AI. I think a lot of people are concerned, you know, particularly when you're doing something like customer support or sales, and they're like, look, if I'm talking to a customer support bot, like, let me know. It's not a, you know, not a real human. And of course they want a lot of safety for children using AI chatbots. So states like Oregon, Hawaii, Colorado, Arizona and Nebraska all have versions of this moving through their legislature right now. I think. I think what's interesting is that it isn't just coming from kind of one political direction. You're seeing conservatives and progressives, both of these are kind of pushing these bills. I think the concern about AI and kids is one of the really rare cases where there's kind of genuine bipartisan energy. Whether all 78 of these are actually going to become law. I think that's definitely a different question, but I think the direction is pretty clear. States aren't waiting for Congress to act. And I think for the AI kind of companies out there, this patchwork of state level regulation is going to become a really big compliance nightmare. You could end up with different kind of disclosure rules in every state, which I think is exactly the kind of thing that eventually forces federal action. So all of these states are putting it out there. I personally don't believe that all of these states are going to be able to regulate all of this in the future. I think the government, the federal government is going to come and Congress is going to come and just pass laws that are like, okay, these are all of the laws as far as, like, you know, disclosure of AI. These are all of the laws as far as disclosure and kind of guardrails for children. And then everyone's going to have to just adopt that instead of having to work through a patchwork of all the states, which could be a nightmare for sure. Okay, Georgia right now is sending three different AI bills to the governor. Speaking of that, there's three AI related bills. They all have made it to Governor Brian Kemp's desk. The first one is SB540. It's a chatbot disclosure child safety bill. So this is kind of a good example. The second one is prohibiting healthcare insurance coverage decisions from being based solely on AI. So an insurer can use AI as part of the process, but a human has to be involved in the final call. And. And the third is SR789, and it basically is creating a study committee to look into AI's broader impact on the state. So, I mean, that one's just kind of a catch all. But I think the insurance bill is one that's very interesting to watch here. That's why I wanted to bring this up, because we've seen a lot of stories over the last year about health insurers using AI to deny claims, and they're doing it at scale. Georgia is essentially saying that, you know, not without a human in the loop can you make those types of decisions. If all of these get signed, Georgia's gonna become one of the more active states on AI regulation. I think I'd expect other states to look at insurance bill as a template because it kind of ties into this broader theme we've been tracking, which is like, AI is getting deployed in some of these really high stakes decision areas. And I think the pushback on fully automated outcomes is getting a lot louder now. At the end of the day, these insurance companies can just say, hey, look, you know, we have a human in the loop for a lot of these decisions. Or maybe they click a little checkbox that says, yes, I accept the AI's decision. And so it's not going to be just like 100% automated. So I don't really know how much disclosure we'd ever get into that or how much of a change it's actually going to make, if I'm being honest, you know, despite how noble a cause that bill might be, I think it'd be pretty easy to get around and just say, hey, look like we use AI tools to help us. I mean, algorithms and software to help us with everything and you can't really kill that. So yeah, sure, a human will click the checkbox that says denied claim based off of this, but the AI is really doing a lot of the thinking because software does a lot of the thinking in most of these things. So I don't really know if it's going to make that big of an impact. The next thing I want to talk about is NOAA labs. They just got FDA approval. It's a big breakthrough for voice based heart failure detection. This is craz easy for me. Essentially they have a product called Vox and it analyzes a five second recording of your voice and it can detect signs of worsening heart failure. The way it works is that AI extracts acoustic features from your voice that are linked to pulmonary congestion and fluid overload. And then it's basically going to have, you know, be detecting these physical changes in your chest and your throat that affect how you sound. It'll be able to notice this before you even notice symptoms a lot of the times. So their algo algorithm was trained on over 3 million sample voices and it's been validated across five multicenter clinical trials with partners including the Mayo Clinic and ucsf. I think this is definitely a big breakthrough, but this designation means that the FDI FDA right now can see enough promise in this to basically expedite the review process for this company. They're expecting EU approval by middle 2026 and a US clinical trial to kick off soon after. I think this is one of those, like stories that shows you AI in healthcare is super exciting. It can actually look, it can be like, it can work really incredibly when it's done correctly. It's not replacing doctors. It's basically giving patients a way to monitor themselves at home without something, without having to go in and kind of take all of this time and money. I think about 6 million Americans are living with heart failure right now. And so I think this kind of early warning system could be really significant for a lot of people. And I also don't think it's only going to be in, you know, voice detection, heart failure. Like you could roll this out to many industries, many different medical problems. So personally very excited about this.
B
All right.
C
We got to talk about Meta's rogue AI agent. Meta's had a bunch of these AI agent kind of snafu moments lately. This one was kind of crazy. Basically an employee posted a technical question on an internal forum. So inside of Meta they have these kind of internal forums that they can ask questions for other developers. And then a different engineer asked an AI agent agent to help analyze the question. The agent then generated a response to this and it posted it and it did this without asking the engineer for permission, which obviously is an issue. And the original employee then took actions based off of the agent's guidance. Right, so, so are we tracking this? Right. Developer says, hey, what do I do about this? Another developer agent goes and tells the developer what to do about it. Then they go and actually do the thing. So there's like human in the loop for part of this, but it's the agent that's like poisoning the. Poisoning the chain of command here. But basically the reason why this is a big deal, because normally that's, you know, would be whatever, is because when the employee took action based off of what the agent told it to do, they inadvertently exposed massive amounts of company and user related data to engineers who were not authorized to access it. So the data was exposed for about two hours before it was caught. Meta classified this as a SEV1 incident, which is their second highest severity level. And this wasn't even an isolated thing. One of Meta's own safety directors, Summer Yu, had previously shared, and I think I've talked about this on a podcast in the past, a lot of people are talking about this, but she kind of previously shared that an AI agent she was using deleted her entire inbox, even though she explicitly told it to confirm with her before taking any action. So there, there are some issues happening here. I think the broader numbers are pretty crazy. Hidden Layers 2026 report found that autonomous agents now account for more than one one in eight reported AI breaches across enterprise. And a separate CISO report found that 47% of organizations have observed AI agents exhibiting unintended or unauthorized behavior. I think this is kind of where the gap between, like when you do these things as a demo and you actually are deploying them, it kind of really shows up. The agents are very impressive in controlled settings, but when you put them into a real enterprise environment with real data and real permissions, if something goes wrong, it can go wrong in a big way. Now what I will say is I think there's definitely an issue here with how, just honestly, what AI model you're using and how smart it is. If I was doing, you know, some, a lot of the tasks that I do with Claude Cowork, for example, and I was using kind of like an older version of Claude or even ChatGPT, it's just not smart enough. I sadly not trying to, you know, throw Meta under the bus, but their meta AI is kind of in that camp with some of their llama models. They're just not as smart, they're not as capable. And so if you have, you know, if you're trying to use them to do these kind of highly important tasks, it's going to mess up, it's going to hallucinate. Now, I think a lot of these problems and not all of them have been actually solved by some of the latest versions of Claude. Opus 4.6 has been amazing. I have not had any crazy incidents or any crazy kind of hallucinations or problems in the last week. I've built two websites and two full sasses in the last week using Claude Cowork and Claude code. I'm not a developer, I don't know, you know, anything about the code, but I've been able to do some really crazy, phenomenal stuff. No hallucinations. It sets up my servers, it sets up my cloud. It's been doing some amazing stuff. So like, on the one hand, I do like these stories because I think it's interesting and important to know and it's kind of like, look, watch out. But at the, but the, at the other side of this, I do feel like a lot of people will use these stories as kind of these AI doomerism. Like, yeah, that's why you can't trust AI without a human in the loop. And look, I'm not trying to be any sort of way, but I honestly believe we're going to have to trust AI without a human in the loof. Whether we do or we don't, it's going to get rolled out and it's going to actually do a good job, I think eventually, and by eventually I mean like within the next six months, many, many, many tasks will be completely automated. This is just from the tools I'm seeing on the ground and what I'm using. So anyways, don't want to go all AI doomerism. Sometimes that's kind of, I guess, funny. Or maybe this isn't funny, having a big security incident like this at Meta. But, but if you're using bad models, that's what's going to happen. If you use some of the best models I just don't see this happening very much in the future. I think this is something that'll be more and more rare. Okay, the next story I wanted to dive into is the Q1 2026 venture funding numbers are out. They've shattered all records at $300 billion. So this comes from Crunchbase. They publish this every quarter. And I think this is basically unlike anything we've ever seen. Investors poured about $300 billion into startups globally in the first quarter of this year. That is up 150% both quarter over quarter and year over year. So to put that into context, this single quarter accounted for about 70% of all venture capital invested throughout the entire year of 2025. Guys, this is crazy for people that are like, oh, you know, it's just a bubble. It's, you know, they're only giving it to the big companies. This is getting spread out to everybody. And I think the AI concentration is where it gets really interesting. About 242 billion of that $300 billion, which is 80% went to AI companies. So the previous record for AI share for venture funding was about 55%. That was in Q1 of last year. So we beat it this year. And it went from 50 to 80%. You know, that's pretty wild. I think when you look at where the money actually went, it's heavily concentrated in the top AI companies. Now I know I did say earlier, like, look, this is pretty spread out and I think comparatively, like if you look 10 years ago and how much money is getting spent in venture and where, like this is very, the amount is still getting very spread out to a lot of the smaller companies. But they're just, I mean, smaller companies are going to raise, you know, 50 million, 20 million, smaller, smaller check sizes. And then you, it's just everything's going to feel like it gets dwarfed by the big companies. So OpenAI, for example, took a huge chunk of this. You know, they just raised $125 billion. Anthropic raised 30 billion. Xai raised 20 billion. Waymo raised 16 billion. Those four rounds alone account for about 180 billion or 65% of all global venture investments for the quarter. So obviously OpenAI's big, you know, blockbuster Mammoth round is going to kind of put a, put a big weight on the scale. But I think the geographic concentration is also really interesting. US based companies raised 250 billion, which is 83% of the global total. And that's up from 71% a year ago. So the investments are getting more concentrated in the United States. And I think it's overwhelmingly late stage. 246 billion went to late stage rounds with 235 billion of that going to companies that are raising 100 million or more. So what does this actually mean? I think if there are a few things worth kind of pulling apart here, first is kind of the obvious, right? The AI industry is attracting capital at a pace that has no historical parallel, right? This has never happened in tech before. Not, you know, during the dot com boom, not during the mobile era, not during crypto. Nothing comes even close to this. And second, the concentration is really significant. When 65% of a quarter, you know, of a quarter's global venture investment goes to four companies, I think it tells you that there is, you know, investors are really thinking about this market in a specific way. They believe they're making big bets and they believe that the foundational model layer is going to be basically dominated by a small number of players, and they're willing to write these enormous checks to make sure that those players have the compute and talent that they need. The third thing that I kind of pull out of this is something I think gets overlooked a lot, and that is basically, you know, what does this mean for everyone else? If you're a startup that isn't building foundational models, raising money right now is actually harder than the headline numbers suggest because the average deal size is being massively inflated by these mega rounds. So if you strip out the top 10 deals, the venture landscape looks a lot more normal, maybe even a little tight, right? Then there's also the question of whether this level of investment is sustainable. $300 billion in a single quarter like that is a ton of money. But the AI companies that are raising this money are also burning through it on compute, on talent, on data. Like they're not building huge war chests. They are burning this money OpenAI anthropic xai all of them are in an arms race, basically for the next generation of models. And the capital requirements keep going up. It's not like we're, oh, look, we figured out how to make this more effective or more efficient. We can spend less money to train the models. At some point, these companies are going to need to show returns that justify this investment because, I mean, these investments are absolutely massive. They're not there yet. Q1 of this year might be the quarter people start to look back on as either the moment that I think investments truly took off, or maybe the moment that things got a little too hot. I'm not sure. I don't think it is a bubble in maybe the traditional sense because the technology is real, right? The adoption is real. A lot of people are using this. OpenAI is doing $2 billion a month right now, but I think that the concentration of capital and just kind of these handful of frontier labs is creating a dynamic that is definitely worth watching carefully. The gap between, you know, who gets this money and who isn't getting this money is getting bigger. This has a lot of implications for competition, for innovation, for what kind of AI products actually make it to market. So these are all things to look out for. Guys, thank you so much for tuning into the podcast today. If you enjoyed this episode, make sure to leave a rating or wherever you get your podcasts. It helps the show a ton. It helps me be able to make all of these podcasts for you. And as always, make sure to go check out AI box AI if you want to get access to over 80 of the top image, audio, video models Sora that will soon be discontinued because it costs $133 for a 10 second clip before it's discontinued, go check it out at AI Box AI. Let me know what you guys think. Thanks so much for tuning in and I'll catch you in the next episode.
B
This is the story of the One As a procurement manager for a hospital system, she keeps every FAC in her network stocked and ready. That's why she counts on Grainger to be her single source for thousands of products, from disinfectants to lighting, air filters, and more. And with fast, dependable delivery, Grainger helps her keep every facility stocked, safe and running smoothly. Call 1-800-GRAINGER click granger.com or just stop by Granger for the ones who get it done.
The AI Podcast – Episode Summary
Episode: Q1 2026 Venture Funding Hits $300B Record
Host: Jaden Schaefer
Date: April 6, 2026
In this episode, host Jaden Schaefer breaks down the record-shattering Q1 2026 global venture funding numbers, with a particular focus on the unprecedented surge in investments into AI startups. The episode also explores the latest surge in state-level AI safety and chatbot regulation, an FDA breakthrough approval for an AI-driven heart failure detection technology, and a major security incident at Meta involving a rogue AI agent. Jaden provides data, analysis, and personal insights throughout, offering listeners a comprehensive view of this momentous period in AI development, investment, and regulation.
[02:05–06:32]
Wave of Regulation: There are now 78 chatbot safety bills introduced across 27 U.S. states, focusing primarily on disclosure, child safety, and limiting certain AI capabilities when interacting with minors.
"According to the Future of Privacy Forum 2026 tracker, there's now 78 different chatbot safety bills that are introduced across 27 different states."
— Jaden Schaefer [02:22]
Bipartisan Energy: Regulation is coming from both conservative and progressive lawmakers—children’s safety in AI is one of the few genuinely bipartisan concerns.
Common Threads:
Compliance Challenge:
Jaden warns of the fractured regulatory landscape:
"This patchwork of state level regulation is going to become a really big compliance nightmare."
— Jaden Schaefer [04:23]
Federal Action Impending?
Jaden predicts that the increasing complexity will force a unified federal response.
[05:42–06:32]
Georgia sending three AI-related bills to the Governor:
The insurance bill in particular could serve as a national model.
"Georgia is essentially saying that, you know, not without a human in the loop can you make those types of [insurance] decisions." — Jaden Schaefer [06:02]
Jaden notes practical limitations—that “human in the loop” rules could be easily circumvented:
"A human will click the checkbox that says denied claim... but the AI is really doing a lot of the thinking."
— [06:24]
[06:32–08:25]
Noah Labs' Vox System:
Recently awarded FDA Breakthrough designation for a system that detects heart failure from a five-second voice recording.
Technology Details:
"Their algorithm was trained on over 3 million sample voices and it's been validated across five multicenter clinical trials."
— Jaden Schaefer [07:25]
Patient Impact:
"AI in healthcare is super exciting… it's basically giving patients a way to monitor themselves at home without having to go in and take all of this time and money."
— Jaden [07:49]
[08:25–12:14]
Incident Details:
Pattern of Issues:
Jaden references prior agent failings, including one that deleted a Meta employee’s email inbox without consent.
"An AI agent she was using deleted her entire inbox, even though she explicitly told it to confirm with her before taking any action."
— [09:07]
Industry-Wide Concern:
"Agents are very impressive in controlled settings, but when you put them in a real enterprise environment, if something goes wrong, it can go wrong in a big way."
— Jaden Schaefer [10:10]
Model Quality Matters:
Jaden asserts these issues are more prevalent with “less smart” models (e.g., Meta's Llama, older GPTs).
"If you're using bad models, that's what's going to happen. If you use some of the best models I just don't see this happening very much in the future."
— [11:59]
Future Prediction:
"I honestly believe we're going to have to trust AI without a human in the loop... it's going to actually do a good job, I think eventually—and by eventually I mean like within the next six months, many, many, many tasks will be completely automated."
— [11:16]
[12:14–17:38]
Unprecedented Scale:
"Investors poured about $300 billion into startups globally in the first quarter of this year. That is up 150% both quarter over quarter and year over year."
— Jaden Schaefer [13:30]
AI Dominates:
Mega Rounds:
"Those four rounds alone account for about 180 billion or 65% of all global venture investments for the quarter."
— [14:40]
Concentration & Impact:
"If you strip out the top 10 deals, the venture landscape looks a lot more normal, maybe even a little tight, right?"
— [15:45] "The gap between who gets this money and who isn't getting this money is getting bigger. This has a lot of implications for competition, for innovation, for what kind of AI products actually make it to market."
— [17:23]
Sustainability Question:
"They're not building huge war chests. They are burning this money... all of them are in an arms race, basically for the next generation of models."
— [16:35]
Broad Context:
This moment dwarfs previous tech investment booms, including the dot-com and crypto booms—investor behavior signals belief in AI’s foundational market potential.
"I think the concern about AI and kids is one of the really rare cases where there's kind of genuine bipartisan energy."
— Jaden Schaefer, [03:40]
"AI in healthcare is super exciting... it's not replacing doctors. It's basically giving patients a way to monitor themselves at home."
— Jaden Schaefer, [07:49]
"Agents are very impressive in controlled settings, but when you put them into a real enterprise environment with real data and real permissions, if something goes wrong, it can go wrong in a big way."
— Jaden Schaefer, [10:10]
"Investors poured about $300 billion into startups globally in the first quarter of this year... this has never happened in tech before."
— Jaden Schaefer, [13:30]
This episode highlights the extraordinary moment for AI—record-breaking investor enthusiasm, major technological breakthroughs in healthcare, deepening state and national questions about accountability, and real-world risks as AI deployment accelerates inside enterprises. Jaden offers a balanced view: acknowledging exuberance and systemic risks, while maintaining optimism about the pace and direction of AI progress. The episode is a vital snapshot for anyone trying to follow the business, technological, and societal stakes in 2026’s AI landscape.