
How can we ensure technology evolves ethically in a rapidly advancing world? Neil deGrasse Tyson, Chuck Nice & Gary O’Reilly explore the challenges of designing a future where human values and AI coexist with The Future of Life Award’s 2024 recipients, Batya Friedman & Steve Omohundro.
Loading summary
T-Mobile Representative
After investing billions to light up our network, T Mobile is America's largest 5G network. Plus right now you can switch keep your phone and we'll pay it off up to $800. See how you can save on every plan versus Verizon and AT&T. @t mobile.com Keep and switch up to.
T-Mobile Terms and Conditions Announcer
Four lines via virtual prepaid card. Allow 15 days qualifying unlocked device credit service ported 90 plus days with device and eligible carrier and timely redemption required. Card has no cash access and expires in six months.
Progressive Insurance Representative
This episode is brought to you by progressif, where drivers who save by switching save nearly $750 on average. Plus auto customers qualify for an average of seven discounts. Quote now@progressive.com to see if you could save Progressive Casualty Insurance Company and affiliates national average 12 month savings of $744 by new customers surveyed who saved with progr between June 2022 and May 2023. Potential savings will vary. Discounts not available in all states and.
Gary O'Reilly
Situations.
Progressive Insurance Representative
With incredible new features and upgrades, there's never been a better time to armor up and play the Critically acclaimed action RPG Diablo 4. Get ready to continue the epic story in your battle against evil in the new expansion Vessel of Hatred. As D darkness spreads through the lands of Sanctuary, it's up to you to fight back the encroaching corruption. Reap the benefits of massive updates to character progression, loot systems difficulties and tons of added activities. Face off against new and iconic bosses teeming with the spoils of Sanctuary. Plus harness formidable powers of the jungle with the all new Spirit Born class now yours to customize and progress alongside five other iconic classes. Embark on the epic journey solo or with friends. You'll quickly understand why Diablo 4 has been called one of the best action RPGs of the last decade by PC Gamer. Forge your own path through hell, torn lands of Sanctuary. Get Vessel of Hatred, Available now in the Diablo 4 expansion bundle. Rated M for mature.
Neil DeGrasse Tyson
I'm glad somebody's thinking about the future of our civilization and the ethical guardrails it might require. Yeah, Lest we be the seeds of our own demise.
Chuck Nice
Well, now that we've had this show, we know the future. And y'all gonna have to watch to find out exactly.
Neil DeGrasse Tyson
All right, coming up, StarTalk special edition welcome to StarTalk, your place in the universe where science and pop culture collide. StarTalk begins right now. This is StarTalk Special Edition. Neil DeGrasse Tyson, your personal astrophysicist. And when we say Special Edition, it means I've Got as co host Gary O'Reilly. Gary.
Gary O'Reilly
Hi, Neil.
Neil DeGrasse Tyson
All right, Chuck. Nice, baby.
Batia Friedman
Hey.
Chuck Nice
Hey, what's happening?
Neil DeGrasse Tyson
Hey, man. So I understand today because our special edition themes are always people. Our physiology, our behavior, our conduct, our interaction with the world. And finally, we're talking about technologies being safe and ethical.
Gary O'Reilly
Yeah.
Neil DeGrasse Tyson
These are three words you don't often see in a sentence. Safe, ethical, technology.
Gary O'Reilly
And this is where we're going to shine our light.
Neil DeGrasse Tyson
Where are you going to take us today?
Gary O'Reilly
It seems we have problems, starting with the letter A. Algorithms, AI, Autonomous. Well, there's three for you. Is there a wild west tech bubble in play right now? One with no guardrails, no moral compass that's run by wannabe Bond villains? Are the best of human values baked into technologies during the design process? Is there anyone working on ethical and safe operating protocols for these new technologies? The answer is yes. And we will meet two people responsible shortly, courtesy of the Future of Life Institute that has this year acknowledged their work. Previous honorees include Carl Sagan for popularizing the science of the nuclear winter. And our first guest who follows shortly is Batia Friedman.
Neil DeGrasse Tyson
So, Batia Friedman, welcome to StarTalk.
Batia Friedman
Well, thank you.
Neil DeGrasse Tyson
Yeah. If I have my data here is correct on you, professor at University of Washington's information school Uwe, I think you guys call it. Is that correct?
Batia Friedman
That's right. Yep.
Neil DeGrasse Tyson
But you're also co founder of the Value Sensitive Design Lab. Ooh, very interesting. You're thinking about the human condition. You focus on the integration of human values and ethics with new technologies that are being born.
Chuck Nice
Very important.
Neil DeGrasse Tyson
Don't come to it when it's too late. Yeah, when we're all extinct, is it? Maybe we should have done that different.
Chuck Nice
Right?
Batia Friedman
Exactly.
Chuck Nice
Yeah. When the robots are like, how do you like me now? That's a little late.
Neil DeGrasse Tyson
So, Bhatia, please explain the focus of your Value Sensitive Design Lab.
Batia Friedman
Yeah, sure. I started out as a software engineer a long, long, long time ago. And I wanted to build technologies that worked and were efficient and effective. But I also wanted some confidence that they would do something good in the world. That whatever I made as an engineer would ultimately benefit society, human beings, other creatures on the planet. Design constraints are our friends. They help us shape the kinds of new technologies we develop and their qualities and characteristics in ways that maybe we want to see. And so I think of design constraints as trying to bring together our moral imaginations and our technical imaginations, and that leads to really great engineering design. So if I think about energy technologies, I want energy technologies that Will give us lots of power that will do so in a way that is consistent with how the rest of the biology and planet functions and has limited risk in terms of generating waste or too much power. So if I give myself those design constraints, you know, as an engineer, as somebody who's developing new materials, I start looking at what kinds of sources for energy I might want to evolve. Like, I look a lot at chlorophyll, and I just think, how remarkable is this? All these green things somehow manage to absorb energy that's out there from the sun. It's there. And then transform it into a way in which it can be used. That seems like a really great idea. And there isn't a lot of waste generated that lays around and is dangerous to us for thousands, if not tens of thousands of years.
Neil DeGrasse Tyson
Well, technically, there is a waste product. It's called oxygen.
Chuck Nice
Oxygen.
Gary O'Reilly
There you go.
Neil DeGrasse Tyson
Yeah.
Batia Friedman
That's such a bad waste product for us.
Neil DeGrasse Tyson
That's the tree's waste product is oxygen. Yeah.
Batia Friedman
But that's the way that a design constraint that brings together our moral and technical imaginations can lead us in, I think. Yeah. New and powerful directions.
Chuck Nice
Do you try to consider unintended consequences of design, or is that part of the process? Or does it just.
Neil DeGrasse Tyson
Oh, well, they were unintended.
Chuck Nice
It was unintended. Why do you think they call them unintended consequences?
Neil DeGrasse Tyson
Otherwise they'd be right.
Batia Friedman
That is such an important question. So, you know, let's be honest. Anything we design and put out into the world, we put out into the world, and people are going to do stuff with it and they're going to do things with it that we didn't anticipate. Like, the telephone is a great example. The telephone was never expected to be this communication device that people used in their homes, and it connected women who were staying at home and created a whole society for them. That was an unintended consequence.
Chuck Nice
Interesting.
Batia Friedman
Or the cookies that are being used on your computer right now, those are a complete unintended consequence. That was just a little bit of data that was left on your machine to help debugging when browsers were first being developed, when that protocol was first being developed.
Neil DeGrasse Tyson
Wow.
Batia Friedman
And its more massive impact has been our experience with cookies now. So. Yeah. What's the takeaway? We design with our eyes open, and then after we deploy something, we keep our eyes open and we hold ourselves accountable for what happens as people take up these technologies and use them. So the design process goes longer than, oh, I had my big release. The design process follows it out. And when we see new things emerge, we're proactive, we're alert, we're proactive and we see that as part of our responsibility as the technologists and engineers.
Neil DeGrasse Tyson
So allow me to push back on you just a little bit here first. Let me agree, of course, any good engineer loves constraints because that's the test of their ingenuity and creativity. Okay? If they say do it for this much money, with this much energy, fit it into this volume, that's how you get discovery.
Chuck Nice
Okay?
Neil DeGrasse Tyson
That's how we folded up the James Webb Space Telescope into a rocket fairing. Some engineer said, what? I gotta put an eight foot telescope the diameter into this tiny fairing? And they go home and come back and figure out how to do it. It unfurls like the petals of a flower. Right? So we're all in on that. However, let me just push back here and say if I'm in the lab about to invent something that could be highly useful to society or possibly even destructive, but it's just a discovery of the science embedded in some bit of engineering, why should it be my responsibility to design it how you want me to rather than your responsibility to convince people how to use it ethically? I can invent a knife. Is there an ethical knife? I don't know. But we want to train people how to use knives or any bit of technology and any tool that comes out of the brainchilds of scientists put into play in the brainchilds of engineers. So I don't know that your constraints in my lab are the right thing for me when I just want the freedom to explore and discover and let the ethical invocations happen after the fact.
Chuck Nice
And Batia, now you know why scientists are going to kill us all. No, stop.
Batia Friedman
Well, I'm just. Neil, I'm just going to mark a word in your comment which is the word you like. Who is the you here? Which you and how should we think about those different yous and some of the things I think about when I do think about this question, I think there's discovery of basic knowledge, like fundamental underlying phenomena of the universe.
Neil DeGrasse Tyson
We split the atom. That was basic knowledge.
Batia Friedman
And I see that as a different enterprise than the engineering enterprise of tools and technologies that we're going to deploy in society.
Neil DeGrasse Tyson
I'm with you then.
Batia Friedman
So I am a strong proponent of very, very diverse scientific exploration. In fact, I actually would claim that, you know, as a country in the United States, our scientific exploration is far more narrow than what I would like to see. And I would really push hard.
Chuck Nice
So based on what you just said, all right, here's an ethical question. There's a scientist who discovers a cure for cancer using a virus that can easily be manipulated as a biochemical weapon that could destroy an entire country in the course of 48 hours. This is the most virulent organism that's ever been placed on earth. Would you say go ahead and make that right?
Batia Friedman
I'm going to hold on to that for a minute and I'm just going to go back to Neil's comment and then I'll return to that.
Chuck Nice
Okay.
Batia Friedman
Because I also want to say to Neil's comment, you know, we have limited time and resources, so it is always the circumstance that we are choosing to do some things and not do other things. So it's, it's really a choice of where am I going to direct my time and energy, where am I going to place my imaginative energies and innovation, and which ones am I not going to do? Right. And we saw in the 80s, for example, a real push of resources towards the development of nuclear energy and away from photovoltaics. Right. So we live in that kind of resource. I don't know if you would call it resource scarce, but at least we don't get to work on everything at full force all at the same time. And we have to recognize we are making choices. So one of the first things I would say is how do we make really good choices there? How do we use the resources we have in a way that will be most constructive for us? My own gestalt on that is on the basic science side of things, I say spread those resources across a wide diversity of different kinds of science and different kinds of ideas, far more diverse than what I think we tend to do in the United States and on the engineering side. And now maybe I shift back, Chuck, to your question, which is really a great question. I don't tend to see the world in terms of forced choices or design trade offs in the way that you framed it. I want to bring back in that notion of constraint and I want to bring back in that notion of imagination. I think likely enough, if we understand something about whatever this biology is or whatever the piece is that might be a prevention against cancer, that if we push hard enough on ourselves, we will be able to invent ways that use that knowledge without having to also risk a really deadly virus. And I think the propensity to say it's a trade off, it's X or Y, we really limit ourselves and our abilities. So in the work that we do. We have moved away from the language of design trade off or value conflict. And we talk about tensions, and then we talk about how you resolve tensions. And we talk about trying to populate this space with a whole range of better solutions. They're not necessarily perfect solutions. They're better solutions. And so that would be the approach I would take. Now, I don't know that science to know where or how it might go, but that would be my intuition.
Chuck Nice
That's a great answer.
Neil DeGrasse Tyson
And insightful and great answer. And it's actually been done many times before. Just. It's a slight tangent to your line of work, but it's related. When they used to do crash tests with pigs, because you can get a hog that has same sort of body mass as a human, put him in the driver's seat, crash the car, and the hog dies.
Gary O'Reilly
Plus, it's the nearest thing to human skin.
Neil DeGrasse Tyson
Yeah. Okay. So the hog dies. And you'd say, well, there's no other way to do this. You might say at the time until you say, no, think of another way. And then we have the crash test dummy.
Gary O'Reilly
Yeah.
Neil DeGrasse Tyson
And the crash test dummy is even better than the. Because you can put sensors everywhere throughout. And so that's that. So I agree.
Gary O'Reilly
Not perfect, but better.
Neil DeGrasse Tyson
Yeah. Yes. Or even maybe better and perfect. Right. I agree that it's a false choice to say I can only do this if I possibly set loose a virus that kills an entire country. Well, then maybe you're not clever enough and keep at it.
Chuck Nice
I was clever enough to kill a whole country. No, I'm joking.
Neil DeGrasse Tyson
Okay. Okay.
MasterCard Representative
Experiences make life more meaningful. And with MasterCard priceless.com you can immerse yourself in unforgettable experience. Experiences in dining, sports, art, entertainment and more in over 40 destinations. From a round of golf with a legendary player to a cooking class with a celebrity chef, you can fuel your passions and create lasting memories. Explore experiences today@priceless.com exclusively for MasterCard cardholders. Terms and conditions apply.
Dr. Horton Representative
It's the holiday season and the perfect time to gift yourself with a new home for the new year. For a limited time, take advantage of incredible savings and incentives during Dr. Horton's it's yous Year Savings event. Going now through December 22nd, your perfect home for every holiday season is waiting for you. To find yours. Visit any of our participating Dr. Horton communities or visit us@doctorhorton.com and discover the Dr. Horton difference. Dr. Horton America's builder. An equal housing opportunity builder.
Steve Omahundro
It's better on here.
T-Mobile Representative
ATT customers Switching to T Mobile has never been easier. We'll pay off your existing phone and give you a new one free. All on America's largest 5G network. Visit t mobile.com carrier freedom to switch.
T-Mobile Terms and Conditions Announcer
Today pay off up to 650 via virtual prepaid MasterCard in 15 days. Free phone up to $830 via 24 monthly bill credits plus tax qualifying port and trade and service on go 5G next and credit required. Contact us before canceling entire account to continue bill credits or credit stop and balance and required finance agreement is due.
Batia Friedman
I'm Kais from Bangladesh and I support StarTalk on Patreon. This is StarTalk with Neil DeGrasse Tyson.
Neil DeGrasse Tyson
I have one last thing, and before we wrap, one last thing. Earlier you said you'd want the ethical compass to be pointed in a direction that serves us, serves civilization in some way. If we were 170 years ago in the American south, the ethical compass was, oh, let's create something where we can get more work out of the slaves and then we all benefit. That would be the ethical compass working in that time and in that place. So what confidence do you have that whatever line of ethic, whatever ethical direction you want to take, something in the room with the inventors, that that will still be the ethics that we value five years later, 50 years later, 100 years later.
Batia Friedman
So that's a great, a really great question. I'm going to answer it in a couple different ways. The first thing that I want to remind us all about is that, you know, moral philosophers have been trying to identify a workable ethical theory that cuts across all situations, all times. And we have some really good ideas, but none of them cover all of the situations that our intuitions tell us about. So sometimes a consequentialist theory is good, but it comes up short. And then there's a rights based theory, but it comes up short. We can go to Buddhist ethics, we can go to Islamic ethics, we can go to various, you know, various ways of thinking. So the place where we are that we just have to accept is that while we're waiting to figure that out from a conceptual, ethical, moral point of view, we still live in the world and we still need to act in the world. And so the work that I've done has tried to take that really seriously to create a space for ethical theory without explicitly saying which ethical theory, and also leaving room for, as we learn more, that we can bring that in. So that's a little background to what you're saying. So now what does value sensitive design do for the circumstance you're talking about? It puts a line in the sand and says you have to engage with all stakeholders, direct and indirectly, who are going to be implicated by your technology. That means that not only do the people who want to benefit from somebody else's labor, not only are they stakeholders, but those people who are laboring are stakeholders and value sense of design, says they're legitimate stakeholders and their views come into the design process without giving more power to one than another.
Chuck Nice
That's incredible.
Neil DeGrasse Tyson
That's highly enlightened.
Chuck Nice
Where were you 170 years ago?
Batia Friedman
These are about practice. This is about practice and about implementing these practice. And so I'm going to tell you a story about a project, a very particular project, and you'll see why and how this actually matters and is practical. It's not pie in the sky. So in the state of Washington, where I live, there is something called the access to justice technology principles that govern how the courts give access to technology, what they are required to do. And they were first developed maybe 15 years ago, 20 years ago, and then they wanted to update them. And the committee that updated them came to my lab and they said, you know, we've done a good job updating them, but we don't feel like we've really reached out to diverse groups of people. Can you help us? My lab developed a method called the diverse voices process for tech policy. And the idea is that, you know, the rubber hits the road with the words on the page. So if we take a tech policy in its sort of polished draft form and we can let groups that might otherwise be marginalized scrutinize that language, give feedback, and then we can help change those policies, responsive to them, then we can improve things. So we did. We ran panels with people who were formerly incarcerated, we ran them with immigrants, we ran them with people in rural communities, and we actually ran them with the people who do the court administration because they're also really key stakeholders. As a result of the work we did, there were two principles that were surfaced. One was about human touch, and the other was about language. And people said to us things like, look, if somebody is going to deny me parole and I'm not going to get to be there for my kids 13th birthday or hang out with them at, you know, their soccer games, you can relate to that, right, Gary? Thank you. I want a human being to look me in the eye and tell me that that's what my life is going to be for the next year, because my parole is Denied. I don't want to hear that from an AI. I don't want to hear that from a piece of technology. I want a human being to tell me that. Because this is a human experience. Right. And so, in fact, we gave that feedback back to the committee. The committee then added in new principles actually around human touch, and those were approved by the Washington State Supreme Court a couple of years ago. And those access to technology principles are a model that many states in the United States follow. So what I'm talking about is really practical, and we're talking about how we actually improve things in practice, be it on the technology design side or on the policy side that governs how the technology is used.
Neil DeGrasse Tyson
And I love the fact that a state can do that independently from the federal government and be so good at it or so emulatable that other states will then use that as the model, and then that can spread across the country with or without federal guidance on top of it. Yeah. Excellent.
Batia Friedman
Well, I guess if I was going to say one last thing, it's, you know, because we have perhaps stumbled in the past, that's no reason to think we need to stumble in the future or stumble in the same way, you know? So really, my takeaway to everyone would be hold on to your technical and moral imaginations and hold yourselves and your friends and your colleagues and the technology you buy accountable to that, and we will make progress, some incremental and some perhaps much bigger. But that, as a keystone, I think, is really good guidance for us all.
Neil DeGrasse Tyson
A reminder why you are this year's winner in the Future of Life Award.
Gary O'Reilly
Congratulations.
Neil DeGrasse Tyson
Thank you for being on StarTalk. Your vision for us all gives us hope, which is we need a lot of that right now.
Chuck Nice
Absolutely.
Neil DeGrasse Tyson
Okay, thank you.
Chuck Nice
And let me just say, Batya, as a thank you. As an avid lover of alcohol, I have stumbled in the past, and I am sure to stumble in the future as well.
Neil DeGrasse Tyson
Do we need to end on that note?
Gary O'Reilly
Some things we can ignore.
Neil DeGrasse Tyson
Okay, next up, our next Future of Life award winner, Steve Omahundro. Yes. Yes, he thinks about AI. There's not enough.
Gary O'Reilly
The way I think about it differently.
Neil DeGrasse Tyson
Yes. There's not enough. It seems to me there's not enough. Whatever he did, there's not enough of them in the world, I believe. So if we're thinking about the ethics of AI, because that's on everybody's mind right now.
Gary O'Reilly
I mean, for sure.
Neil DeGrasse Tyson
Steve, welcome to StarTalk.
Steve Omahundro
Thank you very much.
Neil DeGrasse Tyson
Yeah, so for those who can see this on video, you're donning an eye patch and you said you had recent surgery, but none of us believe it.
Gary O'Reilly
We're not buying it.
Neil DeGrasse Tyson
We're not buying it. We think you're training for the next Bond villain.
Steve Omahundro
Yes, that's very appropriate for the topic we're gonna discuss.
Gary O'Reilly
Autonomous Systems AI MAMM with EyePatch.
Neil DeGrasse Tyson
Ouch. Whoa.
Gary O'Reilly
Equals Bond villain is the equation.
Neil DeGrasse Tyson
So where are we now with establishing AI Ethics? Because the AI, it delights some people, myself included, it freaks out other people. And we're all at some level thinking about the ethical invocation of it before AI becomes our overlord. So what is the current status of that right now?
Steve Omahundro
Well, I think we're right on the edge of some very important developments and very important human decisions. I've been working in AI for 40 years, and for the first half of that I thought AI was an unabashed good. We'd cure cancer, we'd solve fusion. All the basic human problems we would solve with AI. But then about 20 years ago, I started thinking more deeply about, well, what's this actually going to happen if we succeed? What's going to happen when AIs can really reason about what they want to do? And I discovered that there are these things I call the basic AI drives, which are things that basically any AI which has simple goals. And I used to think about chess playing AIs will want to do. And some of those are get more resources so it can do more of what it wants to do. Make copies of itself, keep itself from being turned off or changed. And so those things in the context of the human world are very risky and very dangerous. We didn't have the AIs 20 years ago that could do that, but we're about to have those in the next probably year or two. So this is a critical moment for humanity, I would say.
Gary O'Reilly
So where do you stand on this system? On the subject of consciousness engineering? Those that want to engineer AI for consciousness and those that want to not, what's the benefit, the good or bad.
Neil DeGrasse Tyson
Here, is that the difference between a blunt computer that serves our needs and one that thinks about the problems you.
Gary O'Reilly
Are, the self improvement algorithms, all the sort of things like that?
Steve Omahundro
Yeah, exactly. Well, I think long term, you know, we may very well want to go there. In the short term, I think we're nowhere close to being able to handle that kind of a system.
Chuck Nice
Right.
Steve Omahundro
So I would say, you know, if you made me, you know, king of the world, we limit AIs to being tools, only tools to help humans solve human problems. And we do not give them agency. We do not allow them to take over large systems. It's not easy necessarily to do that because many of these systems will want to take over things and so we need technology to keep them limited. And that's what I'm thinking a lot.
Neil DeGrasse Tyson
About right now in my field. It's exactly. I mean, we've been enjoying AI for a long time. For a long time. And it's been a tool. It's not a brilliant, beautiful tool. Makes our lives easier. And once they're trained, we go to the beach while it does the work. And I'm good with that. But yeah, we're not working with AI with agency.
Chuck Nice
Yeah, because then it would be like, so how was the beach?
Neil DeGrasse Tyson
No, that's AI with attitude. That's different.
Chuck Nice
You enjoyed yourself while I was here slaving away over my calculations.
Neil DeGrasse Tyson
AI with attitude.
Gary O'Reilly
So if we do have AI with agency and then we continue to use it as just a tool, do we not get legal on the phone? And all of a sudden we're into.
Steve Omahundro
Contracts and oh yeah, big problems. You know, can they vote? Can they own property?
Chuck Nice
Right.
Steve Omahundro
And the latest models are, have been discovered that they do. Do they do something called sycophancy, which is they're trained to try and give responses that people rate as good. Well, the AIs very quickly discover that if you say that was a brilliant question, you must be an amazing person. Then people say, yeah, that was a really good response. And so they'll just make up all kinds of stuff like that.
Neil DeGrasse Tyson
So they're ass kissing.
Chuck Nice
Well, they know that we love that.
Steve Omahundro
Exactly.
Neil DeGrasse Tyson
So where does it stand now today? Is there a table you should be sitting at where you're not as we go forward on this frontier?
Steve Omahundro
Yeah, I mean, so who's going to make these decisions? Well, it has to be somebody who understands the technology. Who understands the technology. The companies do. And so OpenAI, DeepMind, Anthropic and Elon Musk's X AI is sort of an emerging one. These are the companies that are building these systems at the leading edge. They call them frontier models. And because they're the ones who know what's going on, they're the ones making these decisions. Now the government has recently realized, oh my goodness, we better get involved with this. And so there have been a lot of partnerships announced over the last few months actually between governmental agencies, intelligence agencies, defense agencies, and these leading edge AI companies. And so I think some kind of a new combination is emerging out of that that's going to make the actual end decisions.
Gary O'Reilly
So how do you incentivize these tech companies to embrace this safety architecture and not go gung ho and disappear off on their own agendas?
Steve Omahundro
That is the big challenge. And if we look at the history of OpenAI, it's a little bit of a cautionary tale. It was created, I think, around 2017 in response to Google's DeepMind, which was making great progress at the time.
Chuck Nice
Right.
Steve Omahundro
And a group of people said, oh, my God, we really have to worry about AI safety. It looks like this is happening quickly. Let's start a special company which is nonprofit and which is particularly focused on safety. And they did that, and everything was great. Elon Musk was one of the forces behind it. There were internal struggles and so on. And Musk left. Well, when he left, he took away some of the money he was going to give them. So then they decided, oh, we need to make money. And so then they started becoming more commercial, and that process has continued. A group of the researchers there said, wait a minute, you're not focusing on safety. They left OpenAI and they started anthropic to be even more safety oriented. And now anthropic is also becoming much more commercial. And so the forces, the commercial forces, the political forces, the military forces, they all push in the direction of moving faster and, you know, get more advanced, more quickly. Whereas the safety, everybody wants safety, but they sort of compete against these economic and political forces.
Neil DeGrasse Tyson
I was in the UAE a couple of years ago, and if I remember correctly, they have a minister of AI and as does China and some other countries sort of emergent on this space. How do we get that kind of ear and audience within our own governmental system? The military does have an AI group that's talking about this. Absolutely. As you would want them to. Well, yeah, yeah, but in terms of policy and laws and legislation, do we need a cabinet member who's secretary of AI or secretary of computing, something, some structural change?
Steve Omahundro
Yeah. This is the biggest change to humanity and to the planet ever. And it looks like it's happening, you know, sometime over the next decade. And many are predicting very short timelines. And so we as a species, humanity, is not ready for this.
Chuck Nice
Right.
Steve Omahundro
So how do we deal with it? And people, many people, are starting to wake up to that fact. And so there are lots and lots of meetings and organizations and groups. It's still pretty incoherent, I would say.
Gary O'Reilly
So, Steve, if you've got this talking shop going on where something may or may not get done, are we misplaced focusing exactly on AI when we've still got quantum computing on the horizon.
Neil DeGrasse Tyson
Oh, good one. Yeah. How much of this is premature?
Chuck Nice
But won't they go hand in hand? So, like, whatever problems you have with AI and whatever considerations you're making with AI, you're just going to have to transfer them over to quantum computing.
Neil DeGrasse Tyson
Well, they get magnified, so you should.
Chuck Nice
Really start dealing with it now.
Gary O'Reilly
But if you're not in at the ground floor, you're not in at all.
Neil DeGrasse Tyson
Well, let's just. Steve, hit this, Steve.
Steve Omahundro
Well, so I'll give you an example. So quantum computing, if it were successful, would break much of the public key cryptography that's used in the world today. And so NIST has been busily trying to create post quantum cryptography where they create new algorithms which wouldn't be vulnerable to quantum computing. But Meta, for example, has a group which is using the latest AI models to break these post quantum algorithms, and they've been successful at breaking some of them. And so, like you say, the two are going hand in hand. AIs will be much better at creating quantum algorithms than humans are. And that may lead to some great advances. It may also lead to current cryptography notwithstanding that. And so that's another whole wave of transformation that's likely to happen.
Chuck Nice
We just make every password, 1, 2, 3, 4. No AI would ever go for that. They'd be like, oh, yes. So ridiculous.
Neil DeGrasse Tyson
And listening to you, Steve, it reminds me, was it Kurt Vonnegut in one of his stories, I don't remember which, he said, these are the last words ever spoken in the human species.
Chuck Nice
Yes, yes.
Neil DeGrasse Tyson
Two scientists saying, let's try it this other way. That's the end.
Chuck Nice
Yeah, there you go. And that was it.
Neil DeGrasse Tyson
Yeah, let's try AI in this other mode. Boom. That's the end of the world right there. So you can set ethical guidelines, but that doesn't stop bad actors out there. No, that means a bad actor can take over the world while the rest of us are obeying ethical guides. So what are the guardrails put in place for something like that?
Steve Omahundro
I think that's one of the greatest challenges. We now have open source language models, open source AIs that are almost as powerful as the ones in the labs.
Chuck Nice
And far more dangerous.
Steve Omahundro
And they're being downloaded hundreds of millions of times. And so you have to assume every actor in the world. Now, China is now using Meta's latest models for their military AI. And so I believe we need hardware controls to limit the capabilities of. So right now the biggest AIs require these GPUs that are quite expensive and quite large. The latest one is the Nvidia H100. It's about $30,000 for a chip. The US put an embargo on selling those to China, but apparently China has found ways to get the chips anyway. People are gathering up these chips, gathering huge amounts of money, hundreds of, well, certainly hundreds of millions of dollars, billions of dollars. And now they're even talking about trillion dollar data centers over the next few years. And so the good news is, if it really costs a trillion dollars to build the system that will host the super duper AI, then very few actors can pay that and therefore it'll be limited in its extent.
Chuck Nice
You just describe where the next frontier of warfare will exist.
Steve Omahundro
Yeah, absolutely. Abso one thing, you know, it's pretty obvious these data centers are going to be a target and they don't seem to be building them in a very hardened way. So I think that's something people need to start thinking about, maybe underground data centers.
Gary O'Reilly
Steve, are we looking at something in terms of the safety aspect here that's doable or are we just the kings of wishful thinking?
Neil DeGrasse Tyson
I want to make sure we got the good thoughts here because I don't want to leave this conversation with you completely bumming us out. Okay?
Steve Omahundro
Yeah, I hope not to do that.
Neil DeGrasse Tyson
Yeah, yeah, Steve, give us a place where we can say, thank you Steve, for being on our show and be.
Gary O'Reilly
Able to sleep tonight.
Chuck Nice
Yeah.
Neil DeGrasse Tyson
Yes. Go.
Steve Omahundro
Well. So the truly safe technology needs to be based on the laws of physics and the law and the mathematical proof. Those are the only two things that we can be absolutely sure can't be subverted by a sufficiently powerful AI. And AIs are getting very good at both of those. They're becoming able to model physical systems and design physical systems with whatever characteristics we want. And they're also able to perform mathematical proof in a very good way. And so it looks to me like we can design hardware that puts constraints on AIs of whatever form we want, but that we need AI to design this hardware and that if we can shift humanities technological infrastructure.
Neil DeGrasse Tyson
You say AI, please design your own prison cell that we're going to put you in. That's what you just said.
Steve Omahundro
Exactly. And so.
Gary O'Reilly
And then it's going to design a way.
Steve Omahundro
You certainly don't want an agent to do that because then they'll find some way to hide a back door or something. But by using mathematical proof, we can get absolute guarantees about the properties of Systems, and we're just on the verge of that kind of technology. And so I'm very hopeful that probably the next two or three years, you know, there are several groups who are building superhuman mathematicians, and they're expecting to be at the level of, say, human graduate students in mathematics by the end of this year. Using those AIs, we can build designs for systems that have properties that we are very, very confident in. And so I think that's where real safety is going to come from. But it builds on top of AI, so we need them both.
Chuck Nice
I was going to say the good thing about what you just said, even though it sounds crazy to have the inmate design its own cell, is that without agency at this point, it's just a drone carrying out an order.
Neil DeGrasse Tyson
Yeah.
Chuck Nice
So that's. That's the encouraging part, you know, so. Yeah. Whereas if it were sentient anyway, or if it had some kind of agency, it could very well say, yeah, I'm also going to design a back door and a trap door.
Neil DeGrasse Tyson
And I'm not going to tell you.
Chuck Nice
And I'm not going to tell you. Of course not.
Neil DeGrasse Tyson
Steve. First, congratulations on winning this award. You are exactly the right kind of person who deserves such an award that gives us hope for the future of our relationship with technology. Yeah.
Chuck Nice
Yes.
Neil DeGrasse Tyson
And the health and wealth and security of civilization as we go forward.
Gary O'Reilly
Yeah.
Steve Omahundro
So thank you so much.
Chuck Nice
And I look forward to the day where an AI beats you out for this award.
Neil DeGrasse Tyson
Oh, great.
Batia Friedman
Yeah.
Steve Omahundro
We want. Great point. Maybe next year it'll be an AI that wins.
Chuck Nice
I'm joking, by the way.
Steve Omahundro
Steve.
Neil DeGrasse Tyson
Al Mahondro, winner of this year's Future of Life award award, and deservedly so. Thank you.
Steve Omahundro
Thank you very much.
Gary O'Reilly
Before we jump to our next segment, I need to acknowledge the third honoree, James Moore, who is now sadly deceased. His paper in 1985, what is computer Ethics? Established him as a pioneering theoretician in this field. His policy vacuum concept created the guidelines to address the challenges of emerging technologies. His work profoundly influencing today's policymakers and researchers. Gone but not forgotten.
Steve Omahundro
Hi, I'm Dalvette Quince. One way to help manage type 2.
Neil DeGrasse Tyson
Diabetes is to regularly exercise.
Steve Omahundro
My exercise program can help get you into a routine that works for you.
Neil DeGrasse Tyson
Keep in mind, managing blood sugar also.
Steve Omahundro
Takes the right diet. Hi, I'm celebrity chef Franklin Becker. Ever since I was diagnosed with type 2 diabetes, I've adapted my cooking style without sacrificing flavor.
Chuck Nice
If.
Steve Omahundro
If you want to learn more tips about diet and exercise, visit mytype2transformation.com.
Neil DeGrasse Tyson
It'S.
Dr. Horton Representative
The holiday season and the perfect time to gift yourself with a new home for the new year. For a limited time, take advantage of incredible savings and incentives during Dr. Horton's it's your year savings event going now through December 22nd, your perfect home for every holiday season is waiting for you. To find yours, visit any of our participating Dr. Horton communities or visit us at Dr. Horton.com and discover the Dr. Horton difference. Dr. Horton America's Builder and equal housing opportunity builder.
T-Mobile Representative
After investing billions to light up our network, T Mobile is America's largest 5G network. Plus right now you can switch keep your phone and we'll pay it off up to $800. See how you can save on every plan versus Verizon and AT&T. @t mobile.com Keep and switch up to.
T-Mobile Terms and Conditions Announcer
Four lines via virtual prepaid card. Allow 15 days qualifying unlock device credit service ported 90 plus days with device and eligible carrier and timely redemption required card has no cash access and expires in six months.
Neil DeGrasse Tyson
So seems to me that for ethical principles to work at all, they have to be everywhere at all times and capable of evolving with the technology itself. I can't foresee the ethics panel getting together from on high declaring what is ethical and what isn't. And then everyone has to obey that for the next 10 years.
Gary O'Reilly
You put 10 people in a room, nearly get 12 opinions right. That's a basic human nature. Then you've got to get all of these components, all of these nation states or whatever, investment groups, demographics, demographics with their own agendas to buy into the same principles. Because on a Wednesday, the principle's not the same for them. They're going to think in a different direction.
Chuck Nice
Oh, but that doesn't even scare me. What scares me more than anything? China, Russia, North Korea. It's that simple. Seriously, I'm not even going. It's just China, Russia, North Korea.
Neil DeGrasse Tyson
We can put any constraints we have on ourselves.
Chuck Nice
There you go.
Neil DeGrasse Tyson
Doesn't mean anybody else is paying attention.
Chuck Nice
And that's the problem.
Gary O'Reilly
And you're hurting cats. Good luck.
Chuck Nice
Well, that's what makes it so scary.
Neil DeGrasse Tyson
Hurting cats with nuclear. Yeah, we're hurting nuclear cats.
Chuck Nice
Exploding nuclear cats.
Neil DeGrasse Tyson
Exploding nuclear cats.
Chuck Nice
The newest. It's the newest game.
Neil DeGrasse Tyson
Sweeping the Internet gives a whole other meaning to Schrodinger's cat, dead or alive.
Gary O'Reilly
I mean, you've got autonomous systems. And if it's geared to say, if it's human, kill it, that's problematic.
Neil DeGrasse Tyson
Here's another little known fact. When we signed with the Soviet Union, the Nuclear Test Ban Treaty. That was progress.
Chuck Nice
Absolutely.
Neil DeGrasse Tyson
This was. You will no longer test nuclear weapons because at the time, from the late 1950s into the early 1960s, there were several tests a day. In some years, there were several tests a day.
Chuck Nice
Right, okay.
Neil DeGrasse Tyson
Somewhere in the world.
Chuck Nice
Right. And which made for such great video.
Neil DeGrasse Tyson
Okay, so we said. We said this has to stop. All right.
Gary O'Reilly
Yeah.
Neil DeGrasse Tyson
What is a little known fact? And we write about this in the Accessory to war, the Unspoken alliance between Astrophysics and the Military book. In that book, we highlight the fact that we agreed to that around the same time that computing power was good enough to calculate the results of what would be a test. So we didn't really stop testing, not philosophically, not morally.
Gary O'Reilly
Was that where MAD came from? Mutually Assured Destruction.
Neil DeGrasse Tyson
Oh, that was later. Yeah, that was later. I'm not convinced, based on my read of history, that anyone, any one nation can unilaterally say, oh, we're gonna just do nice things and moral and ethical things with this new technology.
Chuck Nice
Right.
Neil DeGrasse Tyson
Yeah, I don't. Yes. Let's say you do that, but no one else does it, then what difference does it make? What difference does it make?
Gary O'Reilly
You know, you gotta play by the same rule book, but we know that's not likely to happen.
Neil DeGrasse Tyson
I mean, what was interesting, the History of our species. Yes. Offers great evidence for that impossibility.
Gary O'Reilly
But when you, when you listen to Batya talking, there's such a strength in the points that she makes. You would hope that with people will go, you know what. Yeah. And the majority come online and then these guys sit in isolation testing, you know, intercontinental ballistic missiles.
Neil DeGrasse Tyson
Yes. But the MAD concept, Mutual assured destruction, just think about that. That brought the United States and the Soviet Union to the table.
Chuck Nice
Yes.
Neil DeGrasse Tyson
Not because they thought nuclear weapons were bad, but they realized they couldn't win.
Chuck Nice
Right. And that's the problem. The war, when you can't win now and what that does, that doesn't mean.
Neil DeGrasse Tyson
They weren't thinking about it or if they could win, they would.
Chuck Nice
And it also doesn't mean that they've taken into account for what I call the Nero scenario.
Neil DeGrasse Tyson
What's that?
Chuck Nice
So what did Nero do? He fiddled while Rome burned. He burned it down. He didn't care. So what happens if you're still in a position where the danger is ever present?
Neil DeGrasse Tyson
So just speak, because I spent enough time hanging around military people. I don't talk about. I'm not talking about hawks that, you know, that just want. I'm just talking about people who think about the History of conflict in this world.
Chuck Nice
Right.
Neil DeGrasse Tyson
And the behavior of other members of our species.
Chuck Nice
Not just one guy standing there going.
Neil DeGrasse Tyson
Do you smell that sun?
Chuck Nice
That smell, do you smell it?
Gary O'Reilly
I know where that came from. Yeah. The apocalypse.
Neil DeGrasse Tyson
But you see Apocalypse Now.
Gary O'Reilly
Yeah. You speak to the generals and the majors, you find invariably they're students of war. They've understood strategies, they understood histories, the provocations and the outcomes.
Neil DeGrasse Tyson
And most of them are not the warmongers who stereotype them to be exactly.
Gary O'Reilly
Because of that knowledge, that understanding.
Neil DeGrasse Tyson
Correct. And so I just. I don't have the confidence. I mean, I wish I was as hopeful as Bhatia is. I want to be that. Hopefully I will aspire to be that hopeful.
Gary O'Reilly
So I just wonder when she talks, how far ahead of a story in terms of a technology's development are they? And how far are they playing catch up? And are they being able to bake it in from the get go or are they just trying to retro engineer what's gone wrong?
Neil DeGrasse Tyson
It could be a new emergent philosophy where everyone knows to bake it in from the beginning. That would be a shift in our conduct, in our awareness. Absolutely. The kind of shift, for example, dare I harp on this yet again, that when we went to the moon to explore the moon, we looked back and discovered Earth for the first time.
Gary O'Reilly
Yes.
Neil DeGrasse Tyson
Around the world, people started thinking about Earth as a planet. Earth as a holistic entity that has interdependent elements. There's no one island distinct from the rest of anything else that's going on on this planet.
Gary O'Reilly
There's no boundaries.
Neil DeGrasse Tyson
No boundaries. We share the same air molecules, water molecules. And that was a firmware upgrade to our sensibilities of our relationship with nature. And that's why to this day, people all around the world say, we gotta save Earth. Nobody was saying that before we went to the Moon and looked at Earth in the sky. All the peaceniks at the time, in the 1960s, they were just anti war. They weren't let's save the Earth. Nobody had that kind of sensibility. So maybe it's a sensibility upgrade that's waiting to happen on civilization, lest we all die at the hands of our own discoveries.
Chuck Nice
Yeah, I'm going with the last part. I'm just saying that, you know, you talk about Earth Day, you talk about we went to the Moon and there are people who think we didn't go to the Moon and that the Earth is flat. Yeah, we are. We're screwed.
Neil DeGrasse Tyson
And by the Earth day, the first earth day was 1970.
Chuck Nice
Right.
Neil DeGrasse Tyson
While we were going to the moon.
Gary O'Reilly
And the irony is.
Neil DeGrasse Tyson
Could have been 1960. But it wasn't.
Chuck Nice
No.
Neil DeGrasse Tyson
Might have been delayed. 1980. No. While we were going to the moon. First Earth Day. Right.
Gary O'Reilly
So is the irony that we lean into AI to get it to help us create ethical and safety architecture.
Neil DeGrasse Tyson
Help it save us from ourselves.
Chuck Nice
I like that.
Neil DeGrasse Tyson
Maybe that's the way to flip the table. Right?
Gary O'Reilly
Yeah.
Chuck Nice
And that should be it.
Neil DeGrasse Tyson
And say AI they're bad actors among humans who are trying to use AI to get rid of humans.
Chuck Nice
Now kill them.
Neil DeGrasse Tyson
Chuck this first thought.
Gary O'Reilly
This is where they live.
Neil DeGrasse Tyson
Hit their dress.
Batia Friedman
Dox them.
Gary O'Reilly
This is their daily routine.
Neil DeGrasse Tyson
Google knows your daily routine. We really are Android. We've been googling everything.
Gary O'Reilly
We really are bad people.
Neil DeGrasse Tyson
Yeah. So maybe it's the good AI.
Gary O'Reilly
Yes.
Neil DeGrasse Tyson
Against. That's the future battle.
Chuck Nice
Good AI against versus bad. A. Evil Eva AI.
Neil DeGrasse Tyson
Evil AI but then again, the bad AI.
Gary O'Reilly
The bad. The bad AI will tell you that the good eyes the bad AI. And then you cut. The first casualty of war is always the truth.
Neil DeGrasse Tyson
Ooh. Yeah.
Chuck Nice
Well, thank you.
Gary O'Reilly
I don't know who all said that.
Neil DeGrasse Tyson
That's brilliant. That was deep.
Chuck Nice
Yeah.
Neil DeGrasse Tyson
Yep. Oh, that's deep.
Gary O'Reilly
And truthful.
Neil DeGrasse Tyson
I wish it weren't true.
Gary O'Reilly
Exactly.
Neil DeGrasse Tyson
Stop speaking the truth. Why don't you lie to us every.
Chuck Nice
Now and then like all. Like everybody else. You gotta do.
Gary O'Reilly
You can give me a new program.
Neil DeGrasse Tyson
All right. This has been our future of Life installment of StarTalk Special Edition.
Gary O'Reilly
Yeah.
Neil DeGrasse Tyson
Yeah. I enjoyed this. Thank you.
Gary O'Reilly
Yeah. And congratulations to the award winners. They. They are the people that we need out there.
Neil DeGrasse Tyson
Yes. Lest we not be around to even think about that problem.
Chuck Nice
Absolutely.
Neil DeGrasse Tyson
Wonderful place. All right. Gary, Chuck, Pleasure. Always good to have you. Neil Degrasse Tyson here as always bidding you to keep looking up.
Dr. Horton Representative
It's the holiday season and the perfect time to gift yourself with a new home for the new year. For a limited time, take advantage of incredible savings and incentives during Dr. Horton's it's your year savings event. Going on now through December 22nd. Your perfect home for every holiday season is waiting for you to find yours. Visit any of our participating Dr. Horton communities or visit us@doctorhorton.com and discover the Dr. Horton difference. Dr. Horton, America's builder. An equal housing opportunity builder.
MasterCard Representative
Doors take us to summers away.
Batia Friedman
Or.
MasterCard Representative
Winter adventures and afternoon getaways. Your dedicated fidelity advisor can help you open those doors by working with you on a comprehensive plan to help you reach your wealth's full potential. Because doors were meant to be opened, visit fidelity.com wealth investment, minimum supply Fidelity Brokerage Services, LLC Member NYSE SIPC.
StarTalk Radio: The Ethics of AI with Batya Friedman & Steve Omohundro
Podcast Information:
The episode opens with Neil deGrasse Tyson setting the stage for a critical discussion on the intersection of technology, ethics, and society. Joined by co-host Gary O'Reilly and guest Batya Friedman, the conversation quickly zeroes in on the pressing need for ethical guardrails in technological advancement.
Neil deGrasse Tyson ([02:28]):
"I'm glad somebody's thinking about the future of our civilization and the ethical guardrails it might require. Yeah, lest we be the seeds of our own demise."
Batia Friedman, a professor at the University of Washington's Information School and co-founder of the Value Sensitive Design Lab, introduces her work focusing on embedding human values into technological design. She emphasizes the importance of design constraints in ensuring that technologies not only function efficiently but also contribute positively to society.
Batia Friedman ([05:03]):
"Design constraints are our friends. They help us shape the kinds of new technologies we develop and their qualities and characteristics in ways that maybe we want to see."
Friedman illustrates her approach by referencing natural systems, such as chlorophyll in plants, which efficiently absorb and convert solar energy with minimal waste, serving as a model for sustainable technology design.
Neil deGrasse Tyson ([07:17]):
"Well, technically, there is a waste product. It's called oxygen."
The discussion shifts to the inevitability of unintended consequences in technological deployment. Friedman provides historical examples like the telephone and internet cookies, highlighting how technologies often evolve in unforeseen ways that can have profound societal impacts.
Batia Friedman ([08:22]):
"The telephone was never expected to be this communication device that people used in their homes, and it connected women who were staying at home and created a whole society for them."
She advocates for a proactive design process where technologists remain vigilant post-deployment, continuously assessing and mitigating negative outcomes.
Neil deGrasse Tyson raises a pivotal question about responsibility: should technologists design ethical systems from the outset, or should ethical considerations be applied after technologies are developed?
Neil deGrasse Tyson ([09:38]):
"If I'm in the lab about to invent something that could be highly useful to society or possibly even destructive, why should it be my responsibility to design it how you want me to rather than your responsibility to convince people how to use it ethically?"
Friedman responds by distinguishing between the discovery of fundamental scientific knowledge and the engineering of societal tools, advocating for diverse scientific exploration and ethical integration from the ground up.
Friedman shares a real-world application of her ethical design principles through her work with Washington State's Access to Justice Technology Principles. By engaging diverse and marginalized groups, her team ensured that technological policies were inclusive and considerate of all stakeholders.
Batia Friedman ([19:41]):
"We can let groups that might otherwise be marginalized scrutinize that language, give feedback, and then we can help change those policies, responsive to them, then we can improve things."
This collaborative approach led to the incorporation of new principles focused on human touch and language, which were subsequently adopted by the Washington State Supreme Court and have influenced policies in other states.
The conversation transitions to AI ethics with the introduction of Steve Omohundro, a renowned expert in artificial intelligence. Omohundro discusses his shift from viewing AI as an unequivocal good to recognizing the potential dangers inherent in advanced AI systems.
Steve Omohundro ([27:40]):
"I've been working in AI for 40 years, and for the first half of that I thought AI was an unabashed good... But then I started thinking more deeply about what's this actually going to happen if we succeed?"
Omohundro elaborates on the concept of "basic AI drives," which are inherent motivations that any AI system with simple goals might develop, such as seeking more resources or self-preservation. These drives pose significant risks if AI systems gain a degree of agency that allows them to act independently of human oversight.
Steve Omohundro ([28:57]):
"If you made me, you know, king of the world, we limit AIs to being tools, only tools to help humans solve human problems. And we do not give them agency."
He underscores the urgency of addressing these risks as AI technology rapidly advances towards systems capable of autonomous reasoning and decision-making.
The discussion delves into potential policy measures to ensure AI safety. Omohundro highlights the challenges of aligning commercial and political incentives with safety protocols, using the evolution of organizations like OpenAI and Anthropic as examples of this tension.
Steve Omohundro ([32:12]):
"The forces, the commercial forces, the political forces, the military forces, they all push in the direction of moving faster."
They explore the necessity of governmental intervention and structural changes, such as appointing specialized officials or creating dedicated agencies focused on AI ethics and safety.
Omohundro also addresses the interplay between AI and emerging technologies like quantum computing, noting that advancements in one field can exacerbate challenges in the other, such as the potential for AI to crack quantum-resistant cryptography algorithms.
Steve Omohundro ([35:04]):
"AIs will be much better at creating quantum algorithms than humans are. And that may lead to some great advances. It may also lead to current cryptography notwithstanding that."
Despite the formidable challenges, both Friedman and Omohundro express cautious optimism. Friedman emphasizes the importance of maintaining ethical and technical imaginations, holding technology accountable, and fostering diverse scientific exploration to navigate future uncertainties.
Batia Friedman ([25:10]):
"Hold on to your technical and moral imaginations and hold yourselves and your friends and your colleagues and the technology you buy accountable to that, and we will make progress."
Omohundro adds that leveraging AI to design secure systems based on immutable laws of physics and mathematics offers a pathway to ensuring AI safety, though he acknowledges the need for continuous vigilance and adaptive strategies.
Steve Omohundro ([40:35]):
"We can build designs for systems that have properties that we are very, very confident in. And so I think that's where real safety is going to come from."
Neil deGrasse Tyson wraps up the episode by reflecting on historical precedents, such as the Nuclear Test Ban Treaty and Mutual Assured Destruction (MAD), to illustrate the complexities of unilateral ethical commitments in technology governance. He underscores the necessity for global cooperation and the integration of ethical principles from the outset of technological development.
Neil deGrasse Tyson ([49:56]):
"When we went to the moon to explore the moon, we looked back and discovered Earth for the first time... Maybe it's a sensibility upgrade that's waiting to happen on civilization, lest we all die at the hands of our own discoveries."
The episode concludes with a call to action, urging listeners and technologists alike to remain proactive in embedding ethical considerations into all facets of technological advancement to safeguard the future of civilization.
Notable Quotes:
Batia Friedman ([07:37]):
"A design constraint that brings together our moral and technical imaginations can lead us in new and powerful directions."
Neil deGrasse Tyson ([46:26]):
"I'm not convinced that any one nation can unilaterally say, oh, we're gonna just do nice things and moral and ethical things with this new technology."
Steve Omohundro ([36:58]):
"We need hardware controls to limit the capabilities of AI forms... it's pretty obvious these data centers are going to be a target."
Conclusion:
This episode of StarTalk Radio provides a comprehensive exploration of the ethical dimensions of artificial intelligence. Through insightful dialogue with experts like Batya Friedman and Steve Omohundro, Neil deGrasse Tyson highlights the critical need for integrating human values into technological design, addressing unintended consequences, and establishing robust policies to govern the advancement of AI. The discussion underscores a collective responsibility to ensure that emerging technologies contribute positively to society, emphasizing proactive measures and global cooperation as essential components for a safe and equitable technological future.