
A
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B
To do just about anything in the United States, you have to be licensed, right? So, I mean, whether you're talking about a doctor or a pilot or a CPA or a beautician, right? Like, you have to be licensed for so many things. If you are performing services, if you are making decisions that affect other humans, if you have agency, should AI be the same? Should we be licensing AI agents? And what does that even mean? Should we be doing it? Well, I think as we move toward an agentic AI future, this is an important conversation to have. And that's exactly what we're going to be doing today on this episode of Everyday AI. What's going on, y'all? My name is Jordan Wilson. I'm the host, and this thing is for you. Everyday AI is a daily livestream, podcast and free daily newsletter helping everyday people like you and me not just learn AI, but how we can all actually leverage it and learn from smart people to grow our companies and to grow our careers. If that sounds like what you're trying to do, welcome. You are in the right place. If you haven't already, please go to youreverydayai.com and sign up for that free daily newsletter. We're going to be recapping today's conversation as well as keeping you up to date with everything you need to know in the world of AI. So before we get started and have today's conversation, which I'm extremely excited about, let's first start, as we almost always do, by going over what's happening in the world of AI news. So scientists are calling for an AI human cell. Advances in artificial intelligence and vast experimental data have brought the creation of a virtual human cell within reach, according to researchers from Stanford University. This breakthrough could revolutionize biology by enabling the simulation and understanding of human biomolecules, cells, tissues and eventually organs.
The virtual cell could transform research by allowing in silico experiments, reducing the need for in vivo testing and accelerating the development of new therapies and personalized medicine. The project aims to create a universal cellular representation to predict cellular dynamics and enable cost-effective computer experiments. This effort is already being compared to the Human Genome Project, with a timeline of a decade or more for a fully functional AI human cell model. Yeah, wild. All right, next: Haiku from Anthropic. Claude 3.5 Haiku is here inside of Claude Chat. So Anthropic has released Claude 3.5 Haiku, making advanced AI capabilities accessible to both free and paid users in their Claude accounts. Previously, Claude 3.5 Haiku was only available via the API, and now it is available on the front end inside of Claude's chatbot. The model achieved a notable score of 40% on the software engineering benchmark, surpassing larger models like OpenAI's GPT-4o. Claude 3.5 Haiku excels in specialized tasks such as coding and just being super fast, right? So Claude has their three versions or sizes of models — Haiku, Sonnet and Opus — with Haiku being the smallest. So, you know, not really anything going to change here if you're using Claude on the front end, but for developers and businesses already using Anthropic, it's definitely news to be looking at. All right, and then our last piece of AI news. Well, did OpenAI have a two-year head start on everyone else? According to Microsoft's CEO, they did. So Microsoft CEO Satya Nadella just said this today, emphasizing that the unique advantages gained by OpenAI with a two-year head start in the AI industry have driven a lot of their growth. These remarks came as Nadella appeared on the BG2 podcast. Nadella noted that this lead by OpenAI, which began with Microsoft's early investment in the company in 2019, is unlikely to be repeated with future foundational models.
The release of ChatGPT in 2022 was pivotal, setting off an AI arms race and showcasing the benefits of this strategic early investment. His remarks highlight the strategic foresight required to leverage early investments in AI, underscoring Microsoft's role in shaping the future of AI technologies. All right, so we're gonna have more on those stories and a whole lot more in our newsletter, so make sure you go check that out. All right, y'all, I am excited to talk today, and thank you everyone for joining us live. So everyone — Fred in Chicago, and Frank and Sam and Jay, and everyone else — thanks for tuning in. What questions do you have about licensing AI agents? Make sure to get them in now. Maybe we'll have time for them. But I'm excited for today's guest, so help me welcome to the show — there we go — Dr. Denise Turley, Vice President of Technology and educator. Denise, thank you so much for joining the Everyday AI Show.
C
Thank you. Thank you. I'm excited to be here.
B
We talk about AI agents, right? It's been the buzzword of 2024. But when we talk about licensing, what does that even mean, to license an AI agent?
C
Yeah, and of course it's not even a thing yet, right? But I was at a conference a while ago and started engaging in a really fascinating discussion where we were sharing about, you know, where we thought the future was going with AI. We did have agentic AI out there then — my team has been building solutions with that for a while — but it wasn't as popular as it's starting to become. And so we started really thinking about how we are safeguarding the population, whether that's users in business, whether that's individuals, whether that's kids. How are we making sure that there are guardrails around this technology that is super powerful? We're at risk of harm if we don't figure out some way to rein it in. And so we came up with the concept of, well, look, we have to license people when they're in certain careers: financial planners, people in the medical field, you know, in accountancy. So why wouldn't we also think about this notion of extending a license to the AI? There are some people who now have virtual agents and virtual coaches that are sort of extensions of themselves. And so there's a concept where either, if I already have a license, that license extends to my agent, right — if I'm a financial planner, or I'm a practitioner or something else — or we just license the AI agent as technology that's actually giving advice and is qualified to do so, has been tested, has been trained, and has ongoing tests and compliance, so that you can increase trust and also have accountability. Right. Somebody ultimately needs to be accountable, we believe, for any decisions or any advice that these agents are giving.
B
And this is fascinating, right? Like, to me this is such a fun topic to jump into, because I think the rush over the last year or two has been, one, how do we have more autonomous agents? How can we give these large language model-powered agents more decision-making ability, more agency, more power? But then it's like, well, what about the responsibility? With great power comes great responsibility. Where is that responsibility? So something fascinating there, Denise, that you said is, you know, for the humans or for the professionals that already have a licensure for whatever it may be, passing that on to an actual AI agent — how could that actually work? You know, I know in different parts of the world it might work differently. In the US, how could that ultimately work?
C
Yeah, I mean, I think some of it might be trial and error. Right. So, you know, the fact is, all of this is new and we don't know. But what we think is that if I'm already licensed — let's just say I'm a licensed financial planner, for example, or maybe I'm a licensed coach of some sort — since I've already gone through that training, perhaps there's a framework where I'm allowed to extend that license to my AI agent, meaning that it's acting on my behalf, it's trained on my knowledge, it's trained on, you know, what my specialty is. And that also means that I'm now accountable for everything it does. It means that I've tested it, it means that I've made sure that it's safe, it means that I'm validating responses, I'm going in and checking what it's doing, and that ultimately now I am responsible. But the extension of my personal license as a practitioner can extend to my AI agent.
B
And you know, even for me personally, when I think about this, there's a strong duality to it. Right. Like, part of me is almost scared to have licensed AI agents, because that means, oh, they have decision-making power, they have responsibility, they have agency, they're actually making decisions and executing things. And then the other part of me is super excited, right? Because I'm like, oh, for so many tasks, for so many sectors, that's huge. So how can we as humans — because this is a bigger question, right? It starts to get into the relationship between humans and AI — how should we be viewing this? Is this an exciting thing? Is this something we should be looking at with extreme caution? Like, how should we all be thinking about this concept of licensing AI agents and essentially, you know, kind of rubber-stamping them to act like a human?
C
Yeah. So I think it is proceeding with caution, for sure. With all of this new technology, it's moving so fast and it's very, very exciting. And listen, I love it. I consider myself sometimes to be in an AI bubble. If somebody says to me that they're not using AI, I'm like, huh? What? How is that possible? Let me show you. But I do think that we have to move forward cautiously, because it's moving at such a fast pace that we're struggling to keep up from a safety perspective. And so when you think about agents — agents being able to make decisions or act autonomously — they're already doing it. Right. This isn't actually new. It's been going on now for at least the last six months, and it's getting more and more popular. I think Google's just coming out with some new stuff this week as well, right? So this is happening. What we're seeing now, though, is this extension into domains that are newer: when we think about personal use, when we think about coaches. I mean, listen, I think it was a few months ago that two women married their AI. Now, it didn't happen in the US; it was in a different country. But this extension of AI being able to act on our behalf, to act sometimes in what feels like a human sense — I think it's already happening, and it's just continuing.
B
So many great questions already from our audience. So if you do have any, livestream friends, get them in. But I think it's important to dive in a little bit deeper on what you just said there, Denise, because the relationship between humans and AI is changing. Right. You kind of mentioned, yeah, there are people out there marrying their AIs. There are people out there who are, you know, dating AIs. And even for me, it's like, I don't understand that — but just because I don't understand it doesn't mean it isn't happening. And the same thing with even our own interactions with the normal, quote-unquote, AI or large language models that we're using. You mentioned Google Gemini just now; they have a live agent that you can interact with. ChatGPT just a couple hours ago released their update to advanced voice mode that brings in live video. So from a business perspective, though, what should business leaders be thinking or understanding when it comes to interacting with AI agents like humans? Because, I don't know, is it just going to be a bunch of people now in cubicles talking to agents and having relationships with them, like you have to manage people? Is that what it's going to be like?
C
I do, I do think that's right. I think we will start seeing where we're going to have, you know, agents as a workforce. Right. And so we'll have to think about how we manage that: how do we manage the agents, how do we make sure we're reviewing their work, making sure that it's accurate. And this is happening already. I don't know if you guys are noticing, but there are some people now who aren't joining meetings anymore; they're just sending their virtual note taker. I mean, with my virtual avatar that I have, I can send it to a meeting and I can have it interact with people in that meeting on my behalf, because it's trained on my knowledge, it understands my tone, how I speak, it's trained on my responses. And so imagine a future where I'm not even showing up. Now, I don't know that I personally will love that future. Right. Because if you invited me to a meeting and you send your avatar, your AI representative, Jordan, I don't know that I'm showing up. I don't know. Right. I might send my avatar to go talk to your avatar. And then where are we? I don't know. It certainly makes for interesting dialogue, but it is absolutely where we are going. As organizations are thinking about this, I think we have to think about the different skill sets that we need, right? As we're hiring people, we need people who can manage, now, not necessarily a staff that's just humans. Right. You need to figure out how you are also managing agents and reining them in and making sure you've got safeguards. So I think it's exciting, but I think there's a lot of stuff that we haven't figured out yet. I think personally there's a large number of leaders who aren't giving this topic the amount of attention that it deserves, and we've got to talk about it.
B
So something else to talk about is how this process would work. Let's just say right now — I know we're talking about a theoretical here, but I think it's important to have this conversation — let's say agents can get licensed and they can act on behalf of licensed professionals. Right. So giving medical advice, right? Like, I don't know, if my primary care doctor had an agent, maybe I wouldn't have to wait like eight months to go see my primary care doctor. Maybe I could get in and get questions answered right away. So I get the benefits, but what dangers are there for licensing AI agents?
C
Yeah. So I think there's a number of things we have to be careful with. So we know, for example, that there's a lot of concern about bias in data. Right. It's a thing. We know that traditionally disadvantaged groups can also often get the short end of the stick when decisions are being made about their health care that are not in their best interest. So we have to be really careful about that. We've got to be careful about misdiagnosis. Right. The AI is absolutely going to get to you quicker, Jordan. It's going to take your call, it's going to interact with you. It's going to be really, really friendly and potentially be exactly what you need in that moment. So if you're looking for empathy because you're having a tough day, you're dealing with a tremendous amount of pain, I imagine the personality will be able to meet you exactly where you are, and it's going to potentially feel a lot more relatable than that doctor who is rushing you out of the office. But in many medical practices right now, when you call the doctor, when you call the advice nurse, they are still typing into a system, entering what your symptoms are and getting back recommendations and such. And so I think the challenge — the risk — is making sure that we are still providing quality medical service that's accurate and we're doing no harm. So I think the risks are high, based on the biases in the data and inaccuracy. Right. We know that the AI hallucinates. We know that it makes stuff up in a way that is absolutely convincing. Right. So we have to figure out where it stops. Maybe it sort of gives advice, it suggests a diagnosis, and then that goes to a doctor to review and then suggest the prescriptions. Right. I hope we're not getting to a place where AI is able to dispense prescriptions and medicines.
B
Maybe not yet. Well, we'll bring in a question then from our audience here. Former guest Dr. Harvey Castro is asking: who would oversee the licensing process for AI agents? So I know we're just talking about this theoretically, but yeah — should it be a government body? Or if it's a CPA, should it be whatever the national organization for CPAs is? Who should be overseeing this licensing process?
C
Yeah, you know, I think this is a fascinating topic, and I guess, just right off the cuff, I would suggest the same agencies that are overseeing those professions now. Right. But I do think that, you know, people talk about job loss — this is an area where there's potential job creation, because maybe the ways that we are thinking about overseeing human licenses might need to change a little bit. Right. Because we're developing something new. We've got to have some technical safeguards. We've got to develop ways that we are auditing these systems in a manner that obviously includes technology. So perhaps there's a slightly nuanced skill set that we have to start thinking about and developing.
B
Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on gen AI. Hey, this is Jordan Wilson, host of this very podcast. Companies like Adobe, Microsoft and Nvidia have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use gen AI. So whether you're looking for ChatGPT training for thousands, or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com/partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on gen AI. So another good question here that I'd like to get to from our audience. Jay is asking: given how rapidly AI changes, adapts and learns, when does it need to be relicensed? Yeah, that's great, right? Because you even think, with AI, AI is making things so much faster for humans. So what about for AI? Would they have to be relicensed once a decade? Every day? What would it be?
C
Yeah, yeah. And I don't know that there's a one-size-fits-all response here, but I think it would depend on what that agent is doing. I think it depends on the risk involved with, you know, the advice that they're giving. But absolutely, it would have to be relicensed at a minimum annually, I would guess. Right. And again, we're just saying that something needs to happen; the details of the "what" are still to be figured out. But absolutely, at minimum annually. And I think it depends on how often that particular profession is changing. We have to think about how often that agent is updated. Right. We have to talk about regression testing of the agent. So we've got to apply some of the same things that we are employing with traditional software development, where we have a life cycle of testing anytime a change is made, anytime an update is made — making sure that you're not breaking something that already existed. There's also something new that we haven't had to think about before, right? There's this notion now of what happens when the AI gets smarter and smarter, and maybe even smarter than humans, and goes into this self-protection mode that we saw a little bit about earlier this week. Right. And so if that's the case and we get there, we would have to then start, I think, having more consistent evaluation of its output. And that might be daily, right?
B
This is interesting. And I actually want to go there, right? At first I'm like, ah, let's not go down that rabbit hole. But I'm like, that's a good rabbit hole to go down.
C
That's a deep hole, Jordan.
B
So, you know, a year ago on this show, I made a bold prediction for 2024. I think this is the only one that didn't come true. I said that there would be more AI agents in 2024 than humans. We didn't get there, but I think we'll get there eventually. Do you see it? Because what you just said there, Denise, I think it's important, right? Because, well, if AI agents are smarter than humans, maybe they should be making a lot of normal decisions that humans would make. However, how does that change the role of humans? Is it really just going to be that we're orchestrating and overseeing dozens or hundreds or thousands of agents?
C
Yeah, I mean, listen, we've had some conversations about that recently, and it is sort of mind-boggling and a little bit scary when I start thinking about what the future looks like — and it's probably beyond my lifetime. But yes, I mean, imagine that. I think in the not-too-distant future many of us are going to have robots in the home, right? I, for one, am signing up for somebody to come and do my laundry. Did I say somebody? I already did it. It's not a somebody, it's a something. But you know, I think that there's the potential for us to really start using more of this technology to help us in our daily lives. And I think that normal everyday life for us is going to look much different than it does today, and it might even be in ways that we can't even fathom. I do imagine that I'm going to have a personal assistant AI running around this house doing all the little things I don't want to do, and I'm going to interact with them in a conversational way. There's probably going to be some humor, and if I have a headache or a bad day, I'm going to want them to empathize with me about that. Does that mean that they become my best friend and my favorite companion? I don't know, but I think it's possible.
B
Yeah. Another great question today from the livestream audience. So Nazine — I'll get to the end part of her question — she's asking: should humans be able to choose who treats them or advises them? Right. And outside of just medically, like in general, should humans have the option: hey, I want to be talking, working, collaborating with someone that's just human, versus if everything is more agentic?
C
I think so. I mean, today when you go to the doctor, you have that choice, right? Of even which human. Unless it's, you know, an emergency or something, oftentimes you get to choose who you want to have an appointment with, and you do that based on either trust or relationship or credentials. So I do think that humans should be able to choose whether they want to be treated by or engaged with AI.
B
That's a good point. Yeah. It's like when you select your provider or something, you can see their availability. It's like, you know, this person, two months; this person, nine weeks; the AI agent, literally just seconds. Yeah, right. So, you know, we've covered a lot here, Denise. But I'm curious, even for you personally: would you rather be interfacing with humans? Would you rather be interfacing with AI agents that are licensed? How do you even personally look at this in your own interactions?
C
Yeah, you know, so I use AI a lot, and I think that there is a time and a place for AI, but AI doesn't replace humans. We were at a conference a couple of months ago, and again, same thing: human connection is important, and AI cannot replace that. So I think that there's going to be a place for AI, and it's going to be different based on what your preferences are. Not everybody is as comfortable with AI. I drive an electric car — I have a Tesla — and I actually enjoy the supervised self-driving. So it's driving me around everywhere. I love it. Right. Because I'm a technology freak and I love that stuff. My husband, on the other hand, would never use that. He loves the art of driving. It is a passion for him. It is a hobby, and he would never allow AI to do that and have that type of control. So I think it's very personal. I think it's based on our comfort levels, and it's also based on trust. And we always have to have the ability to override and have the human in the loop. So I love AI to do a lot of the stuff that I don't want to do — like that laundry, Jordan. But there's a lot of things that I'm going to want to just do myself, like interacting with my friends and my family, and even cooking. I might want my AI robot to make dinner one night, but then I also get tremendous pleasure out of pulling together a meal for my family. So how about you? I'm curious how you think you might use it.
B
I mean, gosh, speaking of laundry, my dryer's been broken. Like, please, just give me anything that can actually dry my clothes. But, you know, for me, I'm mixed on it all the time, right? Because I literally talk about AI every day. I get to talk to very smart people like you almost every day. So I'm torn. Part of me is like, yes, we need to be licensing these AI agents, because they're being used and deployed anyway, and sometimes the human doesn't even know. So I think it's important. But then part of me is like, I don't know — that sets a slippery slope for even just what human interaction really means.
C
And we have to have responsibility and accountability. Right? We talked about that — there's this whole lawsuit going on with Character.AI. We had devastating consequences there. Not necessarily agentic in that example, but still, we've got AI that's interacting with somebody, that's giving responses and sometimes advice and options and solutions. And so, you know, we've got to just be really careful and make sure that ultimately we're doing no harm to humans and to society.
B
So, Denise, we've covered a lot in today's conversation. We've talked about this ongoing change between humans and AIs. We've talked about how traditional certification for humans works, and whether AI agents should even be licensed. But as we wrap up today's conversation, what's the one most important takeaway that you want people to know about whether we need AI agents to be licensed, and what that actually means for us in the future?
C
I think it just means, like, it's safety — it's safeguarding society. Right. It's just a matter of balancing innovation with safety. Ultimately, we have to keep humans first.
B
It's a great way to put it: humans first. But yeah, maybe we do need to certify and license those agents, just in case. So Denise, thank you so much for joining the Everyday AI Show. We really appreciate your time and your insights.
C
Thank you so much, Jordan, for having me.
B
And hey, as a reminder, y'all, that was a lot. We covered it topically, we went down the rabbit hole and more — a lot of great insights. If you want to know more, our newsletter is where it's going to happen. So if you haven't already, please go to youreverydayai.com. If this was helpful, share it with a friend. Tell someone about it. I think this is an important conversation for all business leaders to be having, because it does change the future of how we work. Thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers, and we'll see you next time.
Podcast: Everyday AI Podcast
Host: Jordan Wilson
Guest: Dr. Denise Turley (Vice President, Technology & Educator)
Date: December 13, 2024
This episode delves into the emerging debate around licensing AI agents—whether AI-powered virtual assistants and decision-makers should be subject to similar certification, safeguarding, and oversight as human professionals. Host Jordan Wilson is joined by Dr. Denise Turley to explore the risks, practicalities, and societal changes such a shift could bring, particularly as agentic AI becomes an increasingly integral part of our work and personal lives.
“If I'm already licensed... there's a framework where I'm allowed to extend that license to my AI agent, meaning that it's acting on my behalf... and that also means that I'm now accountable for everything it does.”
– Dr. Denise Turley [08:28]
“We know that the AI hallucinates. We know that it makes stuff up in a way that is absolutely convincing. So we have to figure out where it stops.”
– Dr. Denise Turley [16:38]
"There is a time and a place for AI, but AI doesn't replace humans... Human connection is important and AI cannot replace that."
– Dr. Denise Turley [25:03]
“It's a matter of balancing innovation with safety. Ultimately, we have to keep humans first.”
– Dr. Denise Turley [28:06]
Licensing AI agents is fundamentally about safeguarding society while driving innovation. It means building accountability, trust, and human-centered oversight into an AI-powered future—where both choice and responsibility must remain at the forefront.
For additional insights and resources, subscribe to the Everyday AI Podcast and daily newsletter at youreverydayai.com.