Transcript
B (0:08)
Welcome back to Firewall. I'm your host, Bradley Tusk. With us today is our friend, my partner at Tusk Ventures, and frequent guest Bob Greenlee. Bob, thanks for coming on.
A (0:19)
Thank you for having me. Great to be here.
B (0:21)
So the topic of this episode is AI regulation. And the genesis was I'd written a Substack post a couple weeks ago about sort of a taxonomy of how to think about AI regulation. And then Bob sent me his taxonomy, and they were different enough that between the conceptual question of how do you even go about something like this, and the practical questions of what you need to do, who's doing it well, and who isn't, it was enough, I thought, to make for an interesting episode. So that's what we're going to talk about. Okay, I think I'll start off by just laying out the taxonomy as I see it, and then, Bob, you do yours, and then we'll start debating it.

The underlying point here to me is that I have been in politics now since 1992, and I have never seen an issue like this, in that it is so pervasive, not meaning media attention, but meaning all the different areas of society that it can and will touch. And therefore, when you think about regulating something, typically it's a specific issue. It's health care, it's education, it's energy, whatever it might be, and you're working within the construct of that regulatory framework. And then if you think about our work on the venture side, it's typically a portfolio company that is doing something that hasn't been done before, and the question is, how should it be regulated? You have the incumbents and the entrenched interests usually trying to stop it; we're trying to move it forward. Sometimes there is no regulatory framework for it at all because it's a white-space area and you're trying to create the right one. But even then, it's still underneath the overall aegis of a specific vertical and topic.

The thing with AI is it touches everything. It touches consumers, it touches businesses, it touches homeowners. It impacts our energy system, our health care, our education, our economy, our safety. It involves catastrophic risk.
It has a huge impact on unemployment numbers and the job market. It has the ability to do a lot of good. And so the way that I structured the taxonomy was four categories.

The first is consumer protection. And this one is sort of the easiest of the four, because we already have a system of consumer protection that exists mostly at the municipal and state level, sometimes at the federal level, depending on the agency, where the idea is basically, as you would imagine: a company or an entity is doing something, and if it is seen as harmful or fraudulent in some way, the government stops it or creates rules around it. So I see a couple of areas right now where AI is being regulated from a consumer protection standpoint. First are chatbots. I think we've seen about a dozen states this year pass, or be about to pass, legislation. And it's all kinds of things: what chatbots can and can't do, preventing sexually explicit content, disclosures they have to make, restrictions on providing mental health advice, especially to teenagers, and things like that. And those have passed on a very bipartisan basis, both in terms of the votes in the states that have done it, but also the types of states, meaning both red and blue. Second is a topic Bob and I have talked about a lot on this podcast before, which is energy costs for consumers thanks to data center energy consumption. And it seemed, almost inexplicably, that the plan of the hyperscalers was just to build these data centers, plug into the grid, and, you know, let everyone share the cost, which is insane, because why would your average homeowner want a 30 or 40% higher electricity bill, especially when they're already worried about losing their job because of AI, simply to help Sam Altman become a trillionaire? Right. Like, that doesn't make any sense.
And so again, you've seen bipartisan support within states, in both red and blue states, and I think we're at something like 37 states now that have at least introduced legislation to regulate energy costs, typically meaning that they can't be passed on to everyone else on the grid. That could be through providing on-site power. It could be through special types of purchasing agreements. It could be through using more energy-efficient forms of compute, which is something that Bob and I are working on a lot; we can get into that. Third in consumer protection is hiring and the use of AI in making hiring decisions. This has been going on for a while. At Tusk Strategies, we passed a bill for a company called Pymetrics, which was an AI hiring company. They've since sold; I don't know if they're still using the name Pymetrics or something else now. It was in the New York City Council, and the bill was around the use of racial demographics. And the reason why we wanted it was because our biggest competitor didn't have that particular functionality. It was a very controversial bill, but it became kind of the basis for other states. So that's consumer protection.

The second for me is catastrophic risk and harm. And it's tricky because, you know, states are doing it because the federal government has yet to regulate Internet 2.0, social media, let alone AI. So they're taking the lead, and I certainly don't mind that at all. California, New York, and Colorado have all passed bills that deal with frontier models, and I think they're decent bills. You know, we worked on the RAISE Act here in New York. But the catastrophic risk of AI seems to change constantly. You know, Claude sort of talked about Mythos a couple of weeks ago. That felt like a whole new level and a change in the potential risk of AI. Mythos is supposedly able to sort of hack anything.
And so, you know, it's one of those areas where it's not just a question of whether you have regulation or not, or a partisan view on it; you need sort of a living, breathing structure, I would argue, that can evolve as the technology evolves. And, you know, the catastrophic risk is very high, whether it is AI controlling missiles that it could launch without any sort of human authorization, AI figuring out how to construct a bioweapon or enabling some, you know, crazy person to do so, or anything else.

Third category for me is jobs. In this case, it's more about harm prevention, kind of like catastrophic risk, because we're already seeing significant layoffs from companies using AI. Facebook, or Meta, just announced that they're going to lay off 10% of their workforce because of AI tools. And you can't really prevent, I know the left will try, but you can't really prevent companies from becoming more efficient. Right? That doesn't make any sense. So the question then becomes, what do you do with all the people who lose their jobs, and how do you help them? The typical response that I hear from politicians is job training, which is their answer for everything. But the problem is, training them in what? You know, I'm an early-stage VC. I see all the companies that purport to be the industries of tomorrow, and I couldn't tell you what you would train people in. And we're not going to have 200 million plumbers and HVAC technicians. And so the only good idea that I've seen, and I want to get Bob's take on this, is from our friend Daniel Schreiber, who's going to come on the podcast at some point to talk about it. He's the co-founder and CEO of Lemonade, an insurance tech startup that we invested in and worked with.
He lives in Jerusalem, but he was in New York a few weeks ago. We had lunch, and he told me about a white paper that he had funded with the idea of taxing companies on their incremental profits from reduced headcount and then redistributing those profits, through what he called a negative income tax, you could also call it universal basic income, to the people who were laid off. And his argument was that it works really elegantly, because as the layoffs go up, so does the tax revenue, and therefore the ability to take care of the people who were laid off.

So that's three. And then four for me was all the good things of AI, right? How can you use AI to make government work better and make society work better? And I gave two specific examples from our portfolio. One is a company called Doctronic. Thanks to Bob and Marlo Kanemitsu, they received the first license ever, in Utah, to prescribe medication via AI. And if you think about it: you have a prescription from a doctor, you need a refill, and it's a pain in the ass. Either you've got to track the doctor down and call them and wait for them to call you back, or sit on hold for 30 minutes, or you have to go to an urgent care center just to get something; have the AI do it instead. It's just a huge waste of time and productivity. And in a system where affordability and accessibility are by far the number one issue, you know, this improves both of those. So there are things like that. Another one is called Hazel. They use AI for procurement, right now government procurement, and they're working on private-sector verticals too. And, you know, the way it works is, imagine you're a school district and you need pencils. You run low, and someone has to notice. Then they have to tell someone else, who tells the legal department to start drafting an RFP. That takes however long it takes. They review it, someone has to sign off on it.
You're waiting around, and it finally gets sent out. But it goes to, like, the eight pencil vendors they know, and those pencil vendors could easily be doing bid rigging or anything else. Then when the responses come back, however long that takes, it goes to a committee of people who, if they're corrupt, are just stealing from the taxpayers outright. But if they're honest, which in my experience is the case most of the time, they're still human, so they have biases and friendships and rivalries and all this other stuff that gets in the way. And it's just a wildly inefficient system that ends up leading to way higher costs for taxpayers, a lot of wasted time, more bureaucrats than you need on the payroll, and sometimes corruption itself. So that's an area where AI can help people. So to me, those were the four categories and the way to think about this. Bob, you approached it a little differently. What's your take?
