A
Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm. Now onto the show.
B
Welcome to another episode of the Practical AI Podcast. I'm Chris Benson, a principal AI and autonomy engineer. Today we have a special guest who was previously on the show a couple of years ago. If you haven't already seen that episode or recognized him up front, this is Congressman Don Beyer of Virginia, who, in addition to being a Congressman, has an incredible background in AI, which is obviously why we're having him on this particular show today. Welcome back to the show. It's great to have you.
C
Thank you. I'm flattered that you invited me back a second time.
B
Well, the first time was very inspirational. I know it's not the primary topic, but one of the things that really had an effect on me was that you were in a PhD program in AI at George Mason University, and I would bet that most members of Congress don't delve into such things. So whether you like it or not, I think that makes you the coolest member of Congress, period. Thanks for coming on the show to talk a bit about the world of AI and how it touches you in your primary job.
C
Yeah, thank you. It's really fun. And I'm spoiled because I live so close to the Capitol; Northern Virginia is right across the river. So I don't have to be on an airplane for eight or ten hours a week like most of my fellow Congress people do.
B
Fair enough. You got those extra few hours to work on that PhD program. And we got a lot of really positive feedback when you were on a couple of years ago. So anyway, welcome back. The landscape of the world has changed dramatically since the last time we talked to you. We have a new administration in; President Biden was in office back when we talked, and now we have President Trump. At the time, we were talking about a whole set of public policies that were being developed, and I know that has changed. This administration has thrown out a lot of the work that had been done prior and kind of gone their own way. I was wondering if you could start out by laying out the landscape as you look at it, as an AI expert who is in Congress. How has the world changed from your perspective? What's the same, what's different, and how has that changed how you're looking at things and acting upon them?
C
Well, Chris, almost nothing's the same, just because AI is accelerating so very, very quickly. In terms of the Trump administration itself, it's sort of a mixed bag. On the one hand, the new President threw out Joe Biden's executive order on AI, which was incidentally the largest executive order ever written by a president. But then he instituted his own, which was largely the same. Probably the most important thing for me, and I think for most people on the planet, is that Donald Trump saved the Safety Institute out at NIST, the National Institute of Standards and Technology. He renamed it, it's called CAISI now, and kept a lot of the same people. He changed the leadership, but that's normal. So at least there is some safety perspective within the current administration. He brought in David Sacks to be the AI czar, and Michael Kratsios to be chief science advisor. That was interesting because it basically brought in two business people rather than scientists, both of whom made a lot of money in Silicon Valley. That was different from the scientists he had before; for example, the previous head of OSTP was a distinguished scientist out of Stanford. There was a push in the first year or so of the Trump administration towards full-blown accelerationism: no new laws, no restraint. From the defense perspective, the intel perspective, it's "we have to beat China." But it was also mixed, because at the same time Trump decided to sell a bunch of the H200 chips to China, which was the opposite of what Biden did. Biden tried to restrict China's growth in AI by withholding the best Nvidia chips, and Trump reversed that for other reasons. So it's a complicated scenario. And then with the Anthropic battles over the use of Anthropic's models in Iran, or even more so with the Mythos introduction a few weeks ago: the briefing didn't come to you, but it did come to me, and it laid out what was happening.
All of a sudden there was this wake-up call within the administration that AI is progressing so quickly it could endanger all of the cybersecurity measures that American companies and the American government put in place over the decades, and that they really have to pay attention to the security and safety sides of artificial intelligence.
B
Yeah, it's Mythos in particular, with kind of the shock. It was interesting: the administration was very much in battle with Anthropic for a little while, then Mythos came out, and they seem to be backing away from that slowly. At least that's how it seems to come across in the comments out of the White House. I'm kind of curious from more the AI perspective on this. As we've seen each advancing model and supporting technologies, such as the various agentic harnesses, come out, once something is out, you continue to have development from other companies. With Mythos coming out, and the scramble to address the safety concerns it brings, there's the recognition that you're probably going to have other models with similar capabilities evolving over time from various companies, whether they're domestic or foreign. Are folks thinking about such things? From Mythos on, what is the safety picture there?
C
Yeah, Chris, I think very much so. I had been in the more comfortable position of thinking that all of our protections against cyber intruders, your password protection writ large with multiple layers of protection, meant we were going to be safe until quantum computing came. But no, one of the things that Mythos did was look deeply into how the protection efforts were created and begin to unravel them, unfold them, right away. And sure, if Anthropic can do it, you've got to figure OpenAI and Gemini and such are close behind, and the Chinese too. I thought that Anthropic was very responsible in giving the code to a handful of people to anticipate how we were going to have to strengthen our security devices ahead of any widespread dissemination of the Mythos software. Hopefully we can get a running start before the other people catch up. But it's like any arms race. It's not going away. This is going to be step-by-step accelerating for the indefinite future.
B
Yeah. There are so many topics to hit here, but with Mythos, as other competing capabilities come out, some may just be released without any holdback. Anthropic has talked about holding it back for a while, but I think the presumption is that Mythos will be generally available at some point, and likely competing models too. So it seems to be changing things. Coming from where AI merges with the software development world, it's definitely changed how everyone's looking at their cyber posture.
C
Oh, absolutely. And it would not be surprising, Chris, if there is a wholesale rethinking of how cybersecurity works. If we look and say that the tools we've used are not going to be effective anymore, do we step back two or three or four steps and think about what we need to protect? Even the question of what do we need to protect: do we choose to protect much less, and do we protect it in very different ways? Yeah, it's a phase shift, really big picture.
B
Absolutely. So as we step back to the notion of government, we're seeing a lot of conversation around regulation. There is an ongoing debate about where regulation should occur, whether at the federal level or at the state level here in the United States, and a little bit of a struggle. Can you talk about how you see that landscape? Who should be doing what, at what levels of government, and what is a sensible approach to that? And are we doing that or not?
C
Well, I think in the largest picture, the sensible approach is a new Geneva Convention, where we get together with the Chinese and the Europeans and the people in the Middle East and anyone who's doing important work, and try to figure out guardrails that work for everyone. As people have fairly pointed out, if we have this beautiful regulatory system in the United States and China has none, that is not going to work in the long run. In the meantime, we have a more continental concern, and that's: should the federal government do the regulation, or can state and local governments do it? I'm very sympathetic to the argument, probably best made by my friend Jay Obernolte, a Republican congressman from California, that we shouldn't have a Tower of Babel, with Virginia's regulations being different from California's being different from Texas's. However, at the federal level, right now we've basically done just one bill, and that was Ted Cruz's TAKE IT DOWN Act, which gave us the ability, when somebody puts evil sexual imagery of you, Chris, up on Facebook...
B
You can horrify the audience with that, oh my gosh.
C
...to demand that they take it down, and maybe even have a cause of action to sue whoever put it up.
B
That's a good thing.
C
That's the only one we've done. And you're familiar with the bipartisan task force that Mike Johnson and Hakeem Jeffries set up. We had 80 specific legislative recommendations, and so far we've done one of them. We look at Congress's total inability to do anything on social media over these last two and a half decades and say yes, we need a national framework, but in the absence of one, we should not restrict state and local governments from doing the best they can. And there are interesting things. SB 53 in California, which I believe the governor vetoed or amended but which became law, is an important first step in understanding how to regulate artificial intelligence. There's a guy named Alex Bores in New York, a member of the Assembly in Albany, who again is out there trying to think of really important ways a state can make a difference. That shouldn't be the endgame, but it's probably a good place to start. The through line is that state governments are laboratories of democracy; they can move much more quickly. They don't have filibusters and things like that. And then maybe we can learn from, I guess, the over 700 pieces of state legislation on AI that are out there right now to build where we go. I hope it doesn't take 10 years; maybe it should take two or three years. What it will take, Chris, is an administration that wants to press forward with meaningful, light-touch regulation at the federal level. That's not come from President Trump and this administration yet. They've done a lot of stuff on not taxing tips or overtime, but nothing on AI regulation so far.
B
Yeah. I'm kind of curious, it raises a question: is there something inherently partisan in AI? I'm failing to see it, and I'll acknowledge I am not a political figure and that's not where I'm spending my thinking time. But is it perceived as a partisan topic in general, in the large?
C
I think there's a danger that it drifts that way a little. You know, our little task force was completely bipartisan. We're trying to do everything as bipartisan as we can, because this really affects every person. It should be much more like our defense posture, which has typically been very bipartisan, or our foreign policy posture. The one thing that complicates it, Chris, is that typically Democrats have been more inclined to regulate and Republicans have been much more inclined to deregulate. And so when you use the word regulation, or even the idea of putting restrictions around what artificial intelligence can do or how it's used, it's going to stir a little bit of that D versus R. But we have to do the best we can to overcome it.
B
Gotcha. Could you talk a little bit about that? Those of us out here who are not in government see it on the TV and in the browser, and we hear so much divisive language. You talked about the ability of Democrats and Republicans to come together and try to agree on topics, and you mentioned foreign policy within AI. What, if any, are the areas where you see both sides of the aisle working together? Is that working at any level? I recognize the current administration kind of wants to go their own way on a variety of topics, but is there any silver lining there?
C
I think so, Chris. I certainly want there to be. Let's think, for example: one of the big concerns with AI is surveillance. This came to the fore especially when DOGE came in and copied a bunch of Social Security and tax records and, we think, loaded them into Grok, Elon Musk's model. I know my Republican friends well. I'm a Democrat, for those who didn't know that, and the last thing they want is a central government that knows everything about them. That's the reason my Republican colleagues have not wanted gun registration over all these years: because it means the government then knows exactly who owns what gun and can take it from them. I don't think they want the government to know everything about us: our habits, what book we read, what time we go to bed, tracking our location through devices in our cars or our phones. So I think there's agreement there. Certainly there's agreement on the sexual imagery, the misuse of generative AI visually, et cetera. And then where there's probably the greatest concern right now is job displacement. We're all familiar with Dario Amodei's predictions of 25 or 50 percent white collar job displacement in the next two to five years. Different numbers are out there, but we all know this is not going to be at the speed of the agricultural revolution or the industrial revolution, which took place over decades or a century. This could be two to five years. I had a speaking opportunity this morning with the enrolled agents of America, all the folks that do our taxes, CPAs and small accounting firms.
And it was very relevant to them that all of a sudden, if all those functions like accounts payable, accounts receivable, and payroll production are handled by agentic AI, what does that do, not necessarily to their jobs, but to the people who work for them? Just extend that through the entire 18 million white collar workers we have in America. And once again, we did a terrible job of adapting to the job dislocation in the manufacturing sector that came both from trade and even more from technology. So you have all these wiped-out former manufacturing towns, especially in the Midwest but around the country, and our so-called trade adjustment assistance didn't do a very good job at finding them new ways to be productive, to have the dignity of work. That's a big challenge that both Dems and Republicans are facing.
B
And what's the thinking? I mean, that is a topic we talk about on this podcast all the time: concerns over jobs. Come Thanksgiving and Christmas time, when we have extended family around, none of whom in my extended family are AI people other than myself, that is certainly the topic everyone is worried about. Within our family we have an array of different jobs people are in, some blue collar, some white collar. So, channeling some of the questions I get from my own family that I cannot answer: is Congress, is government at large, thinking much about these problems? Where does regulation fit into this, or things other than explicit regulation? How are we considering the worry that this could be a major issue for many families going forward? Where is Congress at on that? Could you share any of your thinking or your perceptions about that?
C
First of all, there are initiatives in Congress. Mark Warner and a fellow Republican, I'm not sure who, maybe Thom Tillis, in the Senate, and I'm the co-lead, I can't remember with whom, in the House, on a bill for a commission on the future of the economy, specifically based on this one question: what do we do if AI displaces massive numbers of workers? And I don't think the tendency is towards regulation. You hear people saying, well, we should just say you can't use this AI technology to eliminate this job. That's probably not even plausible. Instead it's: what are the investments we have to make to ensure that people still have, first of all, a means to live, and second of all, and not unimportantly, something meaningful to do with their days? The great optimists, and I am a major AI optimist, can foresee a world with extraordinary abundance. Nick Bostrom's latest book on an AI utopia is worth reading. Basically, if economics is the science of the allocation of scarce resources, we have a lot of things all of a sudden that may not be scarce. Just look at clothing. It's as unscarce as it's ever been; go back two centuries and everyone wore the same set of clothes year round. With all the energy, the fusion plants being built in America right now by Helion, by Commonwealth Fusion, the 44 companies racing to be the first, while we're still young, energy can be abundant, ubiquitous, and low-priced. So what's going to be scarce? Where that scarcity is, is where humanity will probably go. It could be teachers, it could be care workers; everything can be much more bespoke than it is right now. But then again, it may be that only a subset of the American people fit into those new high-touch, human-relationship type jobs, and then what do you do?
As you know, some of the AI czars talk about UBI, which works well in Alaska, but in general I think most people don't want to be paid to do nothing. I had an interesting conversation with Geoff Hinton last week, who suggested that if there really is that much abundance, let's just start with universal healthcare. Free universal health care takes one worry off of most people's plates and still leaves lots of room for them to work and be productive in other ways.
B
Along this topic, I know a lot of our audience does software development and other AI-tangential jobs. When Opus came out from Anthropic at the 4.5 level late last year, I guess in late November, and Claude Code was out around the same time and gaining steam, I know there was a perception that the 2025 way of writing software, which was very human-centric, perhaps with AI assistance through various agents, was what we were experiencing and living. But writing software in 2026 has been a different experience. I think most people have been recognizing that, that almost a pair programming paradigm with your AI model. And aside from the technical aspect, I think there was a really strong psychological impact: this thing that we have been worried about, that we've talked about for some time, has actually arrived, and we're having to change our behaviors and how we approach our own careers to accommodate it. That's obviously only one white collar job out of many, many that can be affected. But in software development circles there has been quite a lot of conversation around upskilling. And ironically, I think that's an area where people can upskill fairly easily; if they were willing to get into software development, they probably can upskill. But what about jobs where people may struggle with upskilling for various reasons, maybe the level of education they currently have, and they need to step up into that?
We've talked about universal basic income and such, but there's just the idea of being able to change something you've been set in your ways about for a long time and get to a new reality, as people adjust to this rapid AI innovation that's occurring. Any thoughts around that? And I say that as, you know, someone who jumped into a PhD program yourself; you're upskilling your own skills. Any guidance or suggestions that people might take?
C
Well, a couple of threads, Chris. One is that I think as a society we will be much richer. I really do believe in the abundance. One of the challenges we have, and I don't have an easy solution, is what we're already seeing: the concentration of resources in some segments. The rich are getting very, very rich, all the billionaires indeed, and then we have a lot of people left behind. I don't want to project in this podcast how we redistribute the income; any income redistribution comes with enormous social problems. But we can't leave two thirds of the people behind in this. Everyone has to be able to share in the abundance that's created by artificial intelligence. Then beyond that, the fact that the scarce resource is again going to be services rather than things, since we're all going to have enough things, means we don't have to be burdened down by roots and location. If you're in Johnstown, Ohio and the steel plant closes or the auto plant closes, it's tough to move. You've lived there all your life, you own your home there, your family is there. You can't just pick up and say, okay, I'm moving to Charlotte to get a new job. That's why the left-behind places have suffered so much. When you don't have to move, because things are much more relational and even information-based, it may well be that we see growth in non-urban America, suburban, small-town, middle-sized-town America, which I think would be a very good thing. In fact, even in Virginia, we're finding most of our growth in population is happening in rural Virginia, and that's being made possible by electronic communication, by all the communication systems that are out there, a lot of it post-Covid. Look, 20 percent of Americans work from home now, Chris. That's very different.
B
Yeah, case in point for me, certainly most of the time. So, it's very optimistic, and I like hearing that; there's so much doom and gloom around this topic, and I really appreciate you sharing what hopefully will be a path forward. Moving to some of the other concerns people have beyond just the jobs arena, and going into things like misuse of AI: I guess there will be people on both sides of this equation, but we mentioned surveillance in passing a little while ago. Could you talk a little bit about your thinking there? We've had mass surveillance for a number of years in different capacities, going all the way back to the Snowden revelations, when many Americans became aware of levels of surveillance that maybe they hadn't before. At this point, as we're looking at AI-enhanced surveillance, how does that relate to civil liberties, to law enforcement, and to other tangential topics? It touches on so many things. Can you share a little of your thinking around what you're concerned about and what you're not, on AI enhancement of surveillance and the step up of what's possible?
C
Yeah, it's interesting, Chris. You open the whole door of abuse, or just downsides. Surveillance is clearly one of them. We already know, because of various things happening in Congress, they set us all up with DeleteMe accounts. You look at the number of data brokers out there that have enormous amounts of information about you and about me, and the whole notion that all this information, every time you accept cookies or anything else, is creating an ever greater profile of who you are that can be purchased by many, many other people. The notion that we are private people is becoming more and more of a fiction. I don't think that's good for our citizenship or for our own security. I know my wife hates it, and how we reverse it is not clear. I keep searching the Internet for people doing interesting things. Tim Berners-Lee, the guy that created the World Wide Web, has a really interesting project he's working on. He calls it a pod, which would have all of Chris Benson's information in it, and you'd have to have permission to access it, or even maybe pay to get into it, and we'd begin to monetize our own personal information. Everyone is so used to giving it away; when you look at people like Meta, the whole business model is getting information about you and me and then selling it. So that's one. Thinking of other abuses: we already see how very sophisticated the fraudsters are right now, the trillions of dollars that old people especially lose to the people who scam them. And AI makes those scams ever more sophisticated, ever harder to detect. I seem to get one or two Evites a day from good friends that I make sure never to open, because I notice I'm on a BCC, and I know that once I open it, I have once again opened my system to somebody who wants to try to get into it.
And then, not finally, but we talk a lot about Anthropic's debate with Pete Hegseth over the use of AI in autonomous weapon systems. From the beginning, DeepMind and OpenAI and Anthropic have all said they did not want their AIs used for autonomous weapon systems, that there needed to be a human in the loop. But now we're facing, first of all, an administration that doesn't seem to want a human in the loop, and the reality that down the road China, India, South Korea, North Korea could all have autonomous weapons, where they actively choose not to put a human in the loop. So if you have a weapon system with a human and a weapon system without a human, who do you think is going to win? The moral and the ethical questions here are very deep and problematic.
B
Yeah. I think a lot of folks grew up over the last few decades so used to a company putting products and services out with a terms of service that goes with them, a license or something. A lot of folks I've talked to didn't understand what the problem was with the government complaining that a company put out a terms of service. Every company that has products and services puts out how you can use those products and services, if it's licensed in any way. The government, more specifically the administration, seemed to have quite a concern with that. And I guess, if it's going to be a problem for them, why not just choose another vendor whose terms of service work? The fight we saw between the administration and the company was a little bit confusing, because the company wasn't doing anything unusual, at least the way I see it. It said: we have a product or service, you can use it in these ways, don't use it in these ways. Facebook does this; every company does the same. They have do's and don'ts with their services. Was this more of an ego thing, do you think, from the administration? Why did they take the very antagonistic perspective that they did? That's been something I've been in a lot of conversations about: why did this even happen? Why not just say, okay, we can't use it that way, we're not going to use it, we'll use it according to the terms of service?
C
Well, I think you've seen with Hegseth at the Defense Department, you have somebody whose motto is kill, kill, kill, and a very aggressive posture towards everything. Very much a bull in the china closet. So to be told no was unacceptable to him, and he just came right back at them, with a poor understanding of how business works. There are some interesting pieces written, one by Dean Ball, about how Hegseth's approach to Anthropic threatened the entire basis of our private property system. What does that mean? You don't own your company anymore, the one that you built? They seem to be slowly working it out. And of course, the Defense Department turned right away to OpenAI, who was more than happy to provide the services that Anthropic didn't.
B
Yeah.
C
Yeah.
B
So I guess Hegseth's particular approach to people who don't give in to the actions he wants... it just seemed very curious. It seemed to me like a normal business process that went off the rails.
C
You just have to look at all the generals and admirals he has fired in the last year, for crimes unknown. It reminds me of what Stalin did in the 1930s in Russia. Yeah.
B
Yeah. As we look forward at what Congress might do in terms of legislation: we've talked about some of these really big problems, or big issues at least, that we have to navigate, finding better waters down the road so that surveillance is not pervasive, we're not in a 1984 world, and we're not losing lots of jobs along the way. How can Congress impact this going forward? And how does that relate to the international component, since these technologies obviously don't stop at national borders? We've talked a little bit about China and others. When you're in Congress and these are big, hard problems, how do you approach them? How do you think about making this work in the long run as you go through the bumps in the road?
C
Well, my thought, Chris, is it's sort of like you're standing on the beach watching the waves come in, and they're coming in at a whole lot of different places at the exact same time. Congress is inherently incremental. Occasionally we do big things, but I think we're likely to have a whole variety of small bills. For example, the AI Foundation Model Transparency Act just sets transparency requirements for the large AI models, for the five or six big guys, and insists that they safety test them. It doesn't set the safety tests, but it makes sure they happen. The example we use is that the FDA doesn't allow drug companies to sell drugs unless they've been tested extensively, yet an AI company could roll out a new large language model and we have no idea what it's been trained on or whether it's been tested at all. So there are little steps along the way: cleaning up things like watermarks, protecting intellectual property, many different things hitting the beach. I'm hoping at the same time that the Department of State is talking with the folks in Europe and the folks in China and the folks in India about how we all come together to think about AI regulation, and that we look to the states and local governments and ask what's coming out of Richmond or Annapolis or Topeka that is useful for all of us at the federal level. And then at some point, Chris, we also need to at least touch on existential risk, because I find that beyond job displacement, which is immediate, the existential risk is something that virtually every one of my friends and constituents is concerned about. I don't want it to be real. You know, we're not necessarily worried about the Terminator, but rather we have such a poor understanding of where consciousness comes from. I've read many books about consciousness. I've yet to find one that says, oh, here's how it emerges.
But we do know that it's an emergent property. I think it was Craig Mundie who told me that if you look at the human brain, unless you believe in intelligent design, no one designed the human brain, which is the most immensely wonderful, complicated machine. Instead, it evolved over hundreds of millions of years. Now with AI, no one is going to design artificial superintelligence either; it may well grow out of what we have already created. It's evolving already. And when that happens, you run into the alignment problem, right? How do we know it's going to want what we want? I know very smart people are working on trying to build alignment into the machines right now, but it's something we all need to be thinking about.
B
Worrying about it is a challenge. And like you, I have read a lot on consciousness. For listeners who haven't: there are many theories of consciousness, many dozens of possibilities, but there is no agreement on how it emerges as an emergent property. And just to call out one other thing, there's the distinction between intelligence and consciousness. With these technologies, we're certainly in the realm of intelligence arising without consciousness: an intelligent capability that is computed and able to be productive in a particular job, at this point in a superhuman context. But nobody understands consciousness yet; there is no agreement on that. One thing I noticed, reading something just last night before bed, is that when people talk about this, they tend to put their personal bias into what consciousness is. So I think a really interesting thing we need to navigate is: what is consciousness, at what point will it arise, and at what point will we recognize that it has arisen, that it is present? And potentially, to your point, it could come into being without us really realizing it's there. I'm not saying that we're there today, certainly, but philosophically this is definitely a big problem to navigate, along with the concern of whether, when that happens, it will be aligned with us in terms of its best interests versus ours, and what its capabilities will be. Do you have any thoughts, Congressman, about how we start to address those kinds of concerns at this point?
C
Yeah, well, understanding is by far the best way. One book recommendation for your listeners, Chris: it's called Metazoa, M E T A Z O A. I don't remember who wrote it; he's got a kind of hyphenated last name, a British author. It's about the evolution of consciousness from the first one-celled animals through us today. It's a really fun, interesting science read, but you get to the end of it and realize: why is it going to stop with us? Who says that we're the high point and the end point of this? And then I saw a piece yesterday: Richard Dawkins, who famously doesn't believe in God, does believe that Claude is already conscious.
B
I read that as well. As a matter of fact, I believe the article I was referring to from last night was actually a counterpoint to that one, where somebody was offering a criticism of it. But on the existential question, any thoughts on how people might frame that? I think that's one of those questions where people don't even know how to approach it.
C
Yeah, and you look at the pause letter from the 700 people from two years ago.
B
Yes, I remember.
C
Yeah. Demis Hassabis, who founded DeepMind, didn't sign it. Geoffrey Hinton, who won the Nobel Prize, as Hassabis did, didn't sign it, largely because they didn't think that it would work. You can't pause the entire world, or every scientist or every thinker out there.
B
And I will admit that that was my take. Aside from the merits of the actual effort, where all these luminaries signed a letter saying we should stop this kind of development, the world is so diverse in terms of interests and personalities and politics that I thought there's just no chance that that alone is ever going to make it. Any thoughts on how we...
C
There was an interesting thing. Pope Leo gathered a bunch of the best minds at the end of last year, and they came out with this short statement, which got a lot of attention, that said we shouldn't build artificial superintelligence until, A, we know we can control it, and B, there's actually public demand for it. It's nice to have the statement, but it's hard to know how to make it actionable. Who is the "we" that should? That's not going to do it, I totally agree. We are so hungry for the science. We as human beings are so aspirational for something new and better. It's just who we are, you know?
B
That raises, as we're starting to wind up here, a question I wanted to ask before we get to the end, and that is: we're at a moment in history where science is kind of down in the public's consciousness. There's been a lot of erosion of trust and things like that. Any thoughts on how that has a broader impact as we talk about these AI topics? Certainly with the current administration, but also among a lot of folks out there, trust in science has been degraded, which I personally find sad; I think that's not doing a service for mankind at large. How does that, if at all, affect AI? Any thoughts on this general down moment for science?
C
That's a good question. I don't know that I have any kind of good answer on it. I know as a member of Congress, as a Democrat, I've been very dismayed by this administration's approach to investment in science: slashing the university research budgets, a 55% cut to the National Science Foundation, cuts to CDC and NIH, eliminating the science departments at NOAA and EPA, cutting NASA's budget in half. This is not an administration that believes that the scientific structure we had before was meaningful, which is very, very sad. But it's also true that every time you read one of those articles about a scientist who made up all his or her data, that destroys trust a great deal.
B
It does.
C
But I'm looking forward to leaders, including U.S. presidents, who lift up science for its extraordinary importance in our lives. The wonderful world in which we live today has only been possible because of knowledge and science. And the great excitement of artificial intelligence is unfolding for us every day. Just think of AlphaFold, right, with all those protein structures, our understanding of the universe in which we live. Hopefully that leads to better lives for all of us.
B
Well, I can't think of a better way to wind things up than that. That's definitely an inspiring way to finish. Thank you so much for coming on the show today. Great insights; I really appreciated learning a little bit more, like I always do when we talk. And good luck with leading the way in the U.S. Congress, trying to make things a little bit better for all of us, on both sides of the aisle, with AI and other related things. We appreciate your service, sir.
C
And Chris, you end up reaching way more people than I do. So thank you for doing the Practical AI Podcast and putting all this good information out week by week for everybody.
B
Thank you very much.
A
All right, that's our show for this week. If you haven't checked out our website, head to PracticalAI FM, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner, Prediction Guard, for providing operational support for the show. Check them out at predictionguard.com. Also thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.
Date: May 14, 2026
Host: Chris Benson
Guest: Congressman Don Beyer (Virginia)
In this episode, Congressman Don Beyer returns to discuss the rapidly evolving landscape of artificial intelligence in the U.S. and beyond, focusing on national policy, the implications of emerging models, job displacement, security, regulation, surveillance, and existential risks. With his unique perspective as both a Congressman and an AI PhD candidate, Beyer provides candid insight into the bipartisan challenges and opportunities presented by AI—and the sometimes conflicting priorities of policymakers, the tech industry, and the global stage.
On the risks of AI militarization:
“From the beginning... DeepMind and OpenAI and Anthropic have all said they did not want their AIs used for autonomous weapon systems... But now we're facing... an administration that doesn't seem to want a human in the loop...” (C @ 27:21)
On job displacement and reimagined abundance:
"Some of the AI czars talk about UBI, which... works well in Alaska, but in general I think most people don't want to be paid to do nothing... Geoffrey Hinton last week... suggested if there really is that much abundance, let's just start with universal healthcare." (C @ 18:43)
On incremental versus sweeping action:
"Congress is inherently incremental. Occasionally we do big things, but... likely to have a whole variety of small bills." (C @ 34:09)
On consciousness and emergent risks:
"We have such a poor understanding of where consciousness comes from... but we do know it's an emergent property... Now with AI... it may well grow out of what we have already created." (C @ 34:09)
The conversation balances caution with optimism. Beyer is clear-eyed about the speed and magnitude of the challenges: regulatory logjams, the threat of misuse by state and non-state actors, the disruptive impact on labor, and the gnawing unknowns around consciousness and autonomy. Yet he maintains hope for bipartisan cooperation, global diplomatic solutions, and an ultimate abundance fueled by AI—provided society finds inclusive and ethical paths forward.
For listeners desiring actionable insights:
Endnote quote:
"The wonderful world in which we live today has only been possible because of knowledge and science. And the great excitement about artificial intelligence is unfolding for us every day." – Congressman Don Beyer (C @ 43:06)