
Cary Weston
This episode is brought to you by Progressive Insurance. Do you ever think about switching insurance companies to see if you could save some cash? Progressive makes it easy to see if you could save when you bundle your home and auto policies. Try it at progressive.com. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states.
Unknown Advertiser
Summer fun goes great with Family Freedom from T-Mobile. We'll pay off four phones, up to $3,200, and give you four free phones, all on America's largest 5G network. Visit your local T-Mobile location or learn more at T-Mobile.com/FamilyFreedom. Up to $800 per line via virtual prepaid card, typically takes 15 days. Free phones via 24 monthly bill credits with finance agreement, e.g., Apple iPhone 16 128 GB, $829.99, with eligible trade-in, e.g., iPhone 11 Pro, for well-qualified customers. Credits end and balance is due if you pay off early or cancel; contact T-Mobile.
Cary Weston
Hey, welcome to the ChatGPT Experiment. This is a podcast designed to help curious folks better understand what ChatGPT is and find some ways that it can help in your personal or professional life. My name is Cary Weston. I'm your host and I'm glad you're here. Thanks for joining me. Today we're going to talk about trust. Can I trust ChatGPT? Can I trust AI? I can tell you that in almost every workshop, seminar, training, or get-together I have, that question comes up in some form or fashion. Can I trust it? How do I know it's real? How do I know it's true? How do I actually use and trust what it's giving me? Well, we're going to look at that a little bit today, and, spoiling the ending, there's no silver bullet. There are some best practices and some ways that you can set yourself up to succeed. But if you've ever been thinking, how do I trust this? Is this right? Is this correct? Is this giving me the truth? Well, one, you're not alone, and two, let's explore together, shall we? But first, let's get into the week that was. Hope you had a good week. Hot as Hades here in Maine, 90-something degrees. My son and I, as I've shared with you before, have a lawn-mowing thing. I'm trying to teach him the value of work and getting out. He's got nine lawns or so that we go about and mow. But these lawns are brown now. It's just dry; we haven't seen rain in 10 or 12 days. And when you've got a summer job that depends on grass growing in order to make money, and your 14-year-old is looking for money to spend on sports cards, that can be a little bit of a damper. Got another thing to share: I have a daughter who's 16, and yesterday, as I'm recording this, all three of my children officially started making their own money. She started her first job. She's going to be a high school junior, and this is her first time working. So I dropped her off at her first day of work yesterday.
So officially, all three children, big day, big moment in a dad's life, are earning their own funds. Very, very cool. Hey, I got a couple messages I wanted to share with you. I get some feedback, and I really appreciate you sharing what you're doing and what you're thinking, your questions, and when folks download the free guides. By the way, chatgptexperiment.com, if you're new to the show or haven't been there: I've got some free guides on ChatGPT best practices for beginners. I've got step-by-step guides to create some custom GPTs. If you look in the menu under Resources, you'll also see a page with a number of exercises and tips you can follow, from writing to marketing to a bunch of stuff to help you use ChatGPT better. That's why we're here, right? I'm just sharing some of the experiences that I've had. But when folks write to me and share what they're doing, I love reading it. I think it's very, very cool, and thanks for doing that. There were two in particular this week that I wanted to share. Number one, I got a note from a college professor who shared that he's starting a class on ChatGPT for his students, and he's using the podcast, the topics, the outlines, the themes from the show, to create and shape his curriculum. I thought that was really cool, and hopefully I get a chance to go down this year and see the kids, might be able to pop in and say hi. But I thought it was really cool to hear that someone's creating a curriculum around the things we talk about here on the show. So that's really neat. Now, that was for college kids. Then, from literally as far away from me as you can possibly be on the globe, I got a note from another individual who said they've been listening to the show and recognize that kids in school are being shown how to use AI.
But there's a sector this person noticed that was not able to get that training, and that's retirees and some older folks. And so this person is creating a curriculum, again using some of the information, topics, tips, and resources from the show, and is building a program once a week where seniors can come together, learn, share, and expand on their curiosity for ChatGPT. So really, really cool to hear from folks who are carrying the curiosity torch forward, right? Creating curriculums and helping others. I don't know if there's a higher compliment, to be very honest with you, than someone saying the information and topics we're talking about here are going to spur on other activities. That's one-to-many impact, right? Taking what we're talking about, the lessons we're learning, the experiences we're having, and then helping others benefit from it. So first of all, thank you for sharing that with me. Really, really meaningful, and congratulations, and if you ever need any help, let me know. But boy, what a joy it is to hear that someone's inspired enough by what we're doing here to go out and take action and help others learn and be curious as well. So, trust. Can we trust ChatGPT? Can we trust AI? In setting this up, you know, we're learning that AI is more than just talking to ChatGPT and having it create an email or a funny limerick, right? Or an apple pie recipe. AI is everywhere. Every software product being sold is touting AI. It's in our phones, it's in online shopping, it's in our streaming services. It's everywhere, right? And the stakes are growing. It's no longer just a cool tool or an added bonus to enhance an experience. There are entire industries, ecosystems, and economies built around this. It's everywhere, right?
And the real question isn't whether this is good or bad, or whether it's going to grow or shrink, but can we trust it, right? That's the human question that's coming up time and time again. I want to share a couple numbers here. There's a company called Edelman, and they put out a trust report every year. Globally, they're sharing in one of their reports that people's trust in technology is typically fairly high, right? Technology trust is globally around 75%, however they measure it. So 75% of people say they trust technology to help them, but only 50% trust AI. So there's a huge drop there. That's globally. Now, if you get to the U.S., trust in AI has fallen to less than 35% of people polled in the same surveys. And the takeaway from Edelman, which is a very reputable firm that's been doing trust reports and meters for a long time, is this: trust rises when people understand how something works, how it's tested, and feel there's an understanding of the what and the how. And this report concluded that perhaps the lack of understanding, the lack of awareness, the lack of truly peeling back the curtain to see what's going into it, could be impacting how people feel, not only about using it, but about the output that's coming from it. I pulled up an article from Tech Brew called the Black Box Problem. And the black box problem, as it applies to this topic, is that often the AI tools we use, ChatGPT being one, aren't able to explain how they arrive at an answer. As they put it, it's like a confident friend who never tells you their sources; they just know the answer, right?
And that might be fine for casual uses, but for high-stakes things, as we get into analysis and approvals and diagnosis, all the things that we would eventually, if not now, be relying on a tool to expedite and impact our world, we do want to understand and have it explain how it comes to the answer. We just can't trust it at face value, right? And as this article says, when the why is hidden, trust becomes harder to build, especially for life-changing decisions. Like we talked about, we start getting into financial and medical and security concerns, right? And let's just take a step back. If we step out of ChatGPT and AI as a whole and talk about trust, think about how you interact with companies, how you interact with situations. When you feel that something is hidden, when you feel that something is being pushed but not explained, comfort and confidence go down, right? And those are two very important metrics in building trust. So yeah, I think the hidden component, not explaining how or not understanding how, is a huge part of the trust factor. McKinsey is another institution that puts out studies, a think tank, if you will. And the path to trustworthy AI was a question they posed in one of their recent reports. Again, they bring up transparency, right, and privacy. I'll tell you, can I trust it? is a very, very common question. And secondary to that, when we start talking about issues and areas of concern, is privacy: is what I'm sharing and interacting going to stay with me, or is it going to be submitted to the ether? Right. And even when the answer comes back as no, it stays with you and it's private, do you trust it when you don't understand how? Right. And so what McKinsey is saying is that responsible AI, however you define responsible, does include transparency. They added fairness. And of course they're putting in the word governance.
But I don't know that we're at the point of governance yet. I think we're so far out in front of this at the moment that governance is hard to do, because we don't know. I'll explain in a second, though, that it's not stopping some from putting laws and actions in place. But I think governance doesn't lead the way; governance follows what's happening. They are saying that businesses investing in responsible AI, and again, as they define responsible, it's transparent, it's communicating, it's bringing things into the open, it's sharing the experiences and the failures, are getting, as they say, real benefits: faster workflows, stronger trust, because customers can see the transparency. But I think the biggest hurdle, as they've defined it, and I'm going to agree with this because it's the biggest hurdle I see when I talk to companies, organizations, and even individuals, is just the lack of training and know-how, how to do it right. Because, you know, we define an expert as somebody who's spent 10,000 hours doing something, and we just don't have those, right? Or we're very limited, and things are changing so fast that once you're an expert in something, the rules or the details have already changed. So we're really just dealing with people who have made more mistakes than others and have become more comfortable in some patterns than others. But there are an awful lot of people just being introduced to the awakening of what AI is, whether it's ChatGPT or AI as a whole. We're still scratching the surface of people being aware of and even understanding what it is and how it works. Which is, you know, if you're listening to this show on a regular basis, or even if this is your first time, congratulations, because you are.
I'm going to easily say you're in a 5% bracket of folks curious enough to dig in deeper and understand it. Even if you've been using a tool like ChatGPT for some time and feel like this is old hat or something you've gotten used to, I can share with you that 90-plus percent of the people I talk to in workshops and group seminars are just exploring this for the first time, if they've explored it at all. A few months back on an episode I shared that I did a workshop for 200 to 250 business owners, and when I asked how many people had used ChatGPT, very, very few, maybe 1%, raised their hand. We take things for granted if we've been familiar with them. But typically in organizations, and you heard this just a couple weeks ago in the interview episode with the nuclear institute down in Virginia, it took an internal champion to say, this is going to be important, now let's get this all the way through our company. It takes an internal champion, somebody who becomes very, very comfortable with it, to lead the way. And then, as you heard in that episode, having the time to come back and report on it, use it, and create standards, practices, and protocols inside the company, that's heavy lifting, because you've got everything else going on in your world too. So the irony is that ChatGPT and tools like it are ways to become more efficient. But in order to become more efficient, you've got to learn it, and like everything else, it takes longer to do at the beginning. So finding the time to redo and learn and make room for it gets in the way of being efficient. That's the irony here, right? And so they say half of organizations, I'm going to say it's larger than that, lack the training and know-how to do it right, or even to know what right looks like. Yeah.
And so putting that in place at an organization, much less for yourself, is challenging. You know, I mentioned governance being kind of the last step, if not the first step. And I believe that. I think governance is there to put safety rails around it, but not to dictate how we do things and where we go. But in Illinois, for instance, they've put a ban on AI therapists. They passed what's called the WOPR Act, banning AI inside the state of Illinois from acting as a therapist. Lawmakers in Illinois cited risks to vulnerable people, saying that AI might sound empathetic, but it can't truly understand human emotions. And there have been some anecdotes and stories of AI hallucinating and giving bad advice. They're actually putting in violations of $10,000 per offense, making it clear that in Illinois, when it comes to sensitive areas like therapy, they only want humans to lead. And this is an area where we're going to see more of this, and this is the back end, because most of the rules we see come into play after the fact. Having served in government, I've been an elected city councilor in my town and mayor here a couple of times, I can tell you that most governance is reacting to things that are happening. Critical masses will come and learn and have experiences, and then government will oftentimes be called on to put greater safeguards around those activities. But in Illinois, therapy is one of the areas where they're saying we want to put a limit on this. And I've shared with you, listen, I'm going to be very frank with you: I have an AI therapy custom GPT, a personal counselor custom GPT, in ChatGPT. I've shared on the show that I have it, and how I use it. And I use it being aware of the limitations of what it can predict. I don't trust it implicitly.
I do trust it for guidance and conversation, but I put my own mental guardrails on what's happening there. And I do like to talk things out and get an objective viewpoint, and I think there's validity there. But certainly there's a threat too, right? That if you don't go in with those proper guardrails, a tool could be trusted too much. Which is very interesting. And in another article, I think on Gizmodo, a former Google exec named Mo Gawdat was predicting a disruption of humanity because of the growing pace of advancements in AI: that there's going to be an over-reliance on answers, an over-reliance on things that can't be explained, and an over-reliance on things that can't be understood in the immediate term. As we go from curiosity to mainstream, that rush of capitalism, we're going to see a lot of companies pushing things that maybe aren't fully understood, and people trusting things that aren't fully understood and relying on information that may not be fully vetted, or even real, for that matter. And if you've spent any time on Facebook or social media channels, you can start to see that. You can start to see that humans will follow based on their own internal compass, even if it's not what we deem to be objective. Right? And so this Google exec was saying that for the next decade we're going to suffer from human misuse and poor oversight, from what he called evil robots. But that's going down a whole other pipeline, isn't it? Getting into what's going to be created, these different realities and different perspectives. But for me, the question about can I trust ChatGPT, let's just stay here for a minute, comes down to a very basic question: are you asking ChatGPT to work with you, or are you asking ChatGPT to work for you?
And that one word, with versus for, is, I think, a tremendous divide in how the tool gets valued, executed, appreciated, understood. I've shared in other episodes that I look at ChatGPT as an amazing intern. I use the AI component as an amazing intern. It's highly capable, it's very educated, and it's very fast at times. And it has a wide assortment of skill sets and information that you can use to expedite, not replace, and I think that's important. That's where the with and not for comes into play. I've shared before that I believe most projects and work have three phases. There's the beginning planning stage, the outlining stage, understanding what we're doing and why. That's the first step, right? Ideation and planning. Then there's the middle, that second phase I call the busy middle. This is where a lot of the work happens, where we get exhausted. We put all of our mental energy, our time, and sometimes our bandwidth into moving to step three, which I call finishing and polishing. This is the expert phase, where you put in your experience, your insight, your expertise. This is where you bring you. But you can't bring you until you get through that busy middle, and the busy middle is where most of our time gets eaten up: researching, drafting, data crunching, first-pass creation, all that kind of thing. A tool like ChatGPT will help compress that busy middle so we can spend more time verifying, refining, and adding human insight. Look at ChatGPT as a shortcut to get through the busy middle, not a replacement of all three phases as a whole, specifically the third one, which is where you put your verification, your expertise, your insight, your value. You know, there's an old Reagan line from when he was in the White House: trust but verify. I use it quite often. This is an eager intern, and it always returns with an answer.
I've never had ChatGPT say, I don't know. But that answer isn't always perfect, and that answer isn't always right. So it's up to us to review, fact-check, and edit. And like you heard in the interview with the nuclear resource center a couple weeks ago, verification is just part of their daily protocol anyway, regardless of whether something was created by a human or a computer. What ChatGPT and tools like it allow them to do is get to that verification faster. The busy middle is shrunk by using a tool like ChatGPT, or others you may be playing with, like Claude and NotebookLM. But we can't lose that third phase. We can't lose finishing and polishing, human insight. We can't lose what you bring to the table. That's that black box the Google executive is talking about. When you take my three phases, the ideation, the busy middle, and the finishing and polishing, and you remove the finishing and polishing because you no longer think it's needed, that's when we creep closer to the situation he was outlining, where we're just trusting blindly because it's coming from a source we think is right. So can we trust ChatGPT? Sure. What are we trusting it for? That's my question. And a couple closing thoughts here. If you are using it by yourself, go through the process of asking ChatGPT questions, giving it background, giving it as much information as possible. That amazing intern: tell it what you're doing, tell it why you're doing it, tell it what success looks like, and then ask it, do you understand what I need, and do you have any questions? Because I want you to do your best work. That's one way of keeping the quality high and keeping the AI working with you, right? As a tool to expedite the busy middle.
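The "brief the intern" routine above, share the what, the why, and what success looks like, then invite clarifying questions, can be sketched as a small prompt builder. This is only an illustration of the checklist from the episode; the function name and the example lawn-care details are mine, not from the show.

```python
def build_intern_brief(task: str, background: str, success_criteria: str) -> str:
    """Assemble a briefing prompt following the show's checklist:
    what you're doing, why/context, what success looks like, and an
    explicit invitation for clarifying questions before work begins."""
    sections = [
        f"Here is what I'm working on: {task}",
        f"Background you should know: {background}",
        f"Success looks like: {success_criteria}",
        ("Before you start: do you understand what I need, and do you "
         "have any questions? I want you to do your best work."),
    ]
    return "\n\n".join(sections)


# Hypothetical example: briefing the "intern" on a small writing task.
prompt = build_intern_brief(
    task="draft a one-page summer update for my lawn-care customers",
    background="small family operation, nine residential lawns, a dry "
               "spell has slowed mowing for a couple of weeks",
    success_criteria="friendly tone, under 300 words, ends with a "
                     "scheduling reminder",
)
print(prompt)
```

The point of the final section is the "working with you" step: the model is asked to confirm understanding before producing anything, rather than being handed a one-line request and trusted blindly.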
If you are a company, or in a company, we cannot push down best practices and thou-shalts without getting folks comfortable and confident that they understand, first and foremost, what this thing is, what it can do, what it should do, and how to use it. If we skip the governance phase in our company, if we skip the communication phase, if we skip the training phase, if we take for granted that everyone understands at the same pace and the same level what this thing is, how it should be used, and what it can do for us, we are going to lose the collective impact it can have. Because when we assume, we all know what happens. So we do have to take on that busy middle. We can't take for granted that someone understands the same as we do if we haven't communicated it or shared it. And that's where your curiosity comes into play. It's good to be curious, it's good to dabble. You can't break a tool like ChatGPT; you can only learn from it. But I'm going to close by saying: can we trust ChatGPT? And the answer for me is going to be, trust it to do what? Answer that question before we move forward. Because we can't assume that everyone has the same answer, the same approach, or the same value on what they're getting from it. Trust it to do what? I think if we always start there, if we trust but verify and have a tool like ChatGPT work with us and not for us, we're going to be better prepared for whatever comes next, which includes, by the way, updates and changes to the tool you're using. GPT-5 came out this week, and I've been playing with it, and it's supposed to be smarter and faster and more accurate. Do you believe it? I don't know.
Trust it to do what? I'm sharing with you that the more you focus on the fundamentals I've been talking about, having a conversation with it, giving it background, telling it what success looks like, treating it like the intern, asking it questions, asking whether it understands, the better off you're going to be using the tool, whatever version we're talking about, and the more likely you are to get feedback, results, and success from this tool, or others, that you can trust. So the answer to the question, can I trust ChatGPT? I hope you'd say it with me: trust it to do what? Am I going to work with you or work for you? Right? As always, though, your curiosity is going to be the most important component of any answer you give. So again, I'm glad you're here. Chatgptexperiment.com has a number of resources, guides, trainings, workshops, one-on-ones. Check it out. And again, as I said at the open, if you're doing something cool with it, please share. You can contact me through the website. I love hearing what you're doing, and how people are expanding their own curiosity and, like we shared, expanding the impact onto others. Really, really cool to hear what's going on. So I hope this was helpful. Can we trust it? Trust it to do what? That's the answer. Thanks for joining me, and as always, we will talk soon. Until then, do stay curious. Okay.
Unknown Advertiser
Marketing is hard, but I'll tell you a little secret: it doesn't have to be. Let me point something out. You're listening to a podcast right now, and it's great. You love the host. You seek it out and download it. You listen to it while driving, working out, cooking, even going to the bathroom. Podcasts are a pretty close companion. And this is a podcast ad. Did I get your attention? You can reach great listeners like yourself with podcast advertising from Libsyn Ads. Choose from hundreds of top podcasts offering host endorsements, or run a pre-produced ad like this one across thousands of shows to reach your target audience in their favorite podcasts. With Libsyn Ads, go to libsynads.com. That's L-I-B-S-Y-N-A-D-S dot com. Today.
Podcast Summary: Ep 81 - Can I Trust ChatGPT?
Podcast Information:
In Episode 81 of The ChatGPT Experiment, host Cary Weston delves into a pressing question on many listeners' minds: "Can I trust ChatGPT?" The episode explores the nuances of trust in AI, the factors influencing public perception, and practical strategies for effectively integrating ChatGPT into personal and professional workflows.
Cary opens the episode by sharing personal anecdotes, highlighting significant milestones in his family's life. He mentions that all three of his children are now earning their own money, with his 16-year-old daughter starting her first job, and emphasizes the importance of teaching work ethic and the value of earning. These personal touches set a relatable tone for the discussions that follow.
He then shares enthusiastic feedback from listeners: a college professor building a ChatGPT course around the podcast's topics and outlines, and a listener on the other side of the globe creating a weekly ChatGPT program for seniors and retirees.
Cary expresses heartfelt gratitude, stating, "I don't know if there's a higher compliment," reflecting on the meaningful influence the podcast has on its audience.
Cary presents compelling statistics to frame the discussion, drawing on Edelman's trust research:
- Globally, roughly 75% of people say they trust technology to help them.
- Trust in AI specifically drops to about 50% globally.
- In the U.S., trust in AI has fallen to less than 35%.
These figures underscore a significant disparity in trust levels between general technology and AI specifically.
Cary identifies transparency and understanding as pivotal in building trust, citing Edelman's finding that trust rises when people understand how something works and how it's tested, and Tech Brew's "black box problem": tools that can't explain how they arrive at an answer.
Further discussing trust, Cary highlights concerns about privacy, whether what users share stays private, noting that even a reassuring answer is hard to trust when you don't understand how the system works.
Cary examines real-world responses to AI, using Illinois as a case study: the state's WOPR Act bans AI from acting as a therapist, with violations of $10,000 per offense, after lawmakers cited risks to vulnerable people.
Cary shares his nuanced perspective on using ChatGPT: treat it like a capable intern that works with you, not for you, and use it to compress the "busy middle" of a project while keeping the human finishing-and-polishing phase.
Addressing broader concerns, Cary references former Google executive Mo Gawdat, who warns of potential over-reliance on AI answers that can't be explained or fully understood.
Cary offers actionable advice for individuals and organizations aiming to harness ChatGPT effectively: give it background, explain the goal and what success looks like, ask whether it has questions, and always verify its output; companies should invest in training and communication before mandating best practices.
Cary wraps up the episode by reiterating the core message: before asking whether you can trust ChatGPT, ask what you are trusting it to do.
He encourages listeners to explore the resources available at chatgptexperiment.com and to share their experiences, fostering a community of continuous learning and mutual support.
Notable Quotes:
"Can we trust ChatGPT? Trust it to do what? Start by asking that question and build from there."
Resources Mentioned:
- chatgptexperiment.com: free beginner guides, step-by-step custom GPT walkthroughs, and an exercises-and-tips page under Resources.
Final Thoughts: Episode 81 of The ChatGPT Experiment offers a balanced and insightful examination of trust in AI. Cary Weston effectively navigates personal stories, listener feedback, expert opinions, and practical advice to provide a comprehensive understanding of the factors influencing trust in ChatGPT. The episode underscores the importance of transparency, continuous learning, and deliberate use to harness AI’s potential while mitigating its risks.
Stay Curious!