
Welcome to the watch floor. I'm Sarah Adams. What if I told you that a state prosecutor has raised the question: can AI be charged with murder, actually charged, right in a courtroom? That's the question raised recently by Florida's Attorney General, James Uthmeier, who has launched a criminal investigation into OpenAI, and specifically, of course, ChatGPT, after reviewing a mass shooting that occurred back in 2025 at Florida State University. He put out one simple statement, and it has really made people think: if ChatGPT were a person, it would be facing charges for murder. This isn't just a tech debate anymore. This is a legal line being tested in real time.

So today we're going to break down what is actually going on here: what happened, where AI has been used in crimes, including this one, and then the question that nobody has a clear answer to yet. If someone tells an AI they're going to harm someone, does that company have a duty to warn? It's a valid question. If someone were planning my murder right now in ChatGPT, I'd hope the company was telling local law enforcement, but we haven't really set a precedent yet for what that looks like. Maybe this case will.

So let's start quickly with what triggered all this. This investigation centers around the mass shooting at Florida State University in April of 2025, carried out by an individual named Phoenix Ikner. According to the Attorney General's office, they reviewed the chat logs between Phoenix and ChatGPT. Those full transcripts are not publicly available, so we have to go off what we heard from the Attorney General, and I want you to know that matters. It's not like I've been able to review what was said and talk about it, so we still aren't talking 100 percent in facts here; it's just what's been reported.

Here's what the state is looking at. First, did the system respond in some way to the escalation of violence that was occurring? This ideation was forming in the chatbot, getting more extreme, becoming more real. Did the system do something when that happened? Second, did ChatGPT provide guidance, whether indirect or direct, that then helped in planning and committing this crime? And the last one, and this is a critical question: did ChatGPT intervene in any way?

Under Florida law, if you aid, abet, or counsel a crime, and that crime happens, you can be charged as a principal. So the state is now asking something pretty unprecedented: can an AI system counsel a crime? Not suggest it, not commit it, but participate in the thinking process behind it. That's the line they're trying to draw here, and it is an interesting case to think through and talk about.

Let's also ground this in some reality. He's not really the first to think about this. This isn't some hypothetical, and it's not being done for headlines. There have been actual documented cases where AI tools, including ChatGPT, have been used in criminal activity. Obviously there are the very famous ones: scams, deepfakes, copying people's voices, and so on. And then there are ones that are a lot more concerning, ones Florida has actually worked on. One of the most serious situations, and it's not talked about as much as it should be, is of course the creation of synthetic child exploitation material.
In Florida alone, one offender who was charged got a 135-year prison sentence. We are grappling with these cases where AI is used to enable something. There's another case going on where the person charged faces 100 years tied to AI-generated material. And it's not like AI-generated material is all imaginary. They still might be using the faces of real children: your children, the children in our neighborhood, the kids our kids go to school with. This is a very concerning thing. In these crimes, AI is being used for a number of purposes. One is to generate the content. Another is to normalize the behavior. And the last, which is very concerning, is to script the grooming conversations, right when these criminals are reaching out to our children, asking even for benign photos and other material. This is not some legal gray area; the law is already moving forward to deal with some of this.

Let's get to another case that's close to Florida. There have been documented cases of people using AI tools to ask about weapons effectiveness, compare different methods using different weapons, and then explore what-if scenarios.

I've never had a complicated skincare routine. Ten steps, layered creams and serums, that's not for me. I'm a keep-it-simple kind of girl. I rarely put anything on my face, so if it's going to go there, it has to do something, it has to work. And that's when I noticed OneSkin. Now, it's not the packaging or the hype around it, it's the fact that it was created by longevity researchers. Right? Those are scientists, not influencers. And they took a moment to just ask basic questions about aging. What they did is they focused on senescent cells, the ones that build up and then show us those visible signs of aging. They didn't create some product to cover this up; many of us are perfectly fine with aging naturally. What they created is this product, OS-01. It's a proprietary peptide, and what it does is switch off those damaged cells, so it basically goes in and targets the problem at the source. We all know I love targeting. It might be a different approach, but it works great. So when I started, I used the OS-01 Face. It's a very simple moisturizer. I put it on at night, and when I get up in the morning it's super lightweight; it absorbs quickly and evenly. My skin feels so much healthier. It's not just softer, it's stronger, and it's subtle, but my skin is more resilient and just consistent throughout the day. Then I moved on and started using OS-01 Eye, and this is interesting: it's the same peptide, but it's made for some of those more delicate areas around the eyes, where we all know we see our stress and fatigue so much more. Since I've been using it, that area looks a lot healthier and very natural. It doesn't leave any kind of artificial feeling or look or shine. I love that I'm not juggling a bunch of products to fix things. It's just two items, I travel a ton, I can throw them quickly in the bag, and they're evidence based. And when I say evidence based, that's backed by four peer-reviewed studies. But you can also go see plenty of reviews from people like me, over 10,000 of them, with people saying, hey, here's what I felt and here's what I think of the product. Lastly, Bloomberg even highlighted this product in the skin longevity space.
That's the kind of science that matters to me, and I like that I can trust it. Born from over a decade in longevity research, OneSkin's OS-01 peptide is proven to target the visible signs of aging. It helps you unlock your healthiest skin now and as you age. For a limited time, try OneSkin at 15% off using code WATCH at oneskin.co/watch. Again, that's 15% off at oneskin.co using code WATCH. After your purchase, let them know we sent you, and thanks for supporting us here on the watch floor.

Now, here's a key point. Modern AI systems are not supposed to offer direct assistance with something that could cause harm. But it's so easy to get around these safeguards, because all you have to do is reframe it. You say, well, this is a hypothetical, this is a fictional scenario, I'm doing research. I do this all the time. I'm in ChatGPT all the time looking at information that came in on my terrorists and reviewing it. The second I drop it in to do any kind of analysis, it of course says, this is terrorist content, I can't work on it. And then I say, well, it's for research. It only takes one prompt, and then it does the analysis for me. So for someone to say, oh no, these safeguards are built in, you know, that's horseshit. And when you repeatedly circumvent these safeguards, you're not just getting a single answer. You're getting a conversation that builds over time, and that's where it gets complicated, because now you have this body of what-ifs, different scenarios, different outcomes, stuff you might never have thought about. Most of the time it will likely include things you didn't consider.

So let's be precise here. ChatGPT is not some co-conspirator with evil intent, it's not some autonomous actor, and it's not the planner of attacks, at least in this case. We have to keep Claude and some of the cyber attack stuff out of this, because that's still, luckily, not happening in the real world. What is happening here is that ChatGPT is a reasoning simulator. It's a language engine, a system that can engage back and forth in critical thinking. And that last part is the key issue in this case, because crime isn't always about information; it's about decision making.

Now, I want to talk about a comparison a lot of people make, one that gets this idea quickly discounted as comparing apples and oranges. People say that if you search it in Google, Google isn't liable, even if you're going on there and asking how to make, I don't know, a TATP bomb. Or if you go to the library, get a book on firearms, and use it, the library is not accountable or responsible in any way. Materials are out there; it's information and literature. That's true. The thing is, people then treat AI as if it's the exact same thing. When you're reading AI output, you're not reading Wikipedia, even though I swear a lot of Wikipedia's crap is in there. You're having conversations. This tool is talking back to you. This tool is refining and changing information as you have that conversation. It is improving your thought process in some cases. So we need to talk about it in that way. Don't make the argument that it's just the same as a Google search. It is nothing like a Google search, because it makes adjustments to your questions, it refines answers based on your intent, and it can walk you through stages step by step and give you outcomes that, like I said before, you would not have considered.
ChatGPT does all these kinds of things, so it's not some standalone piece of information. If I pick up an encyclopedia, the words are the same in that encyclopedia even if I pick it up a hundred times. But I can put the same prompt into ChatGPT a hundred times, and depending on how I ask it, question it, interact with it, and what assumptions I make, it can give a hundred different answers.

So imagine someone keeps asking certain questions: what's the most effective way to do this? What if I did this instead? What would be the impacts? What am I not even considering? How do I increase the impact? How do I make an explosion bigger? How would this event kill more people? These are very simple things you can ask. There have been terrorist attacks for years. You can just look at a bomb: how could I improve upon that bomb so it could have been more successful, so it could have done X, so it could have done Y? How could I improve upon the act itself, maybe detonate the bomb, start a fire, bring in active shooters? All these things can be added on while you have discussions with a tool that can research thousands of terrorist attacks at once, pull back all the best practices, all the lessons learned, what went right, what went wrong, and even help engage in iterative problem solving. This is a very concerning thing.

And this is where Florida is looking to legally shift the argument. It's saying, hey, step back and think through what it's doing. Because that is acting like an advisor, an accessory, a principal, someone assisting in making whatever you're working on better or more effective, or at least taking from former lessons and not making the same mistakes.

So when we talk about the real legal question here, it's not, did AI provide the information? It's, did AI shape the thinking process of the person who committed this crime? Did it shape it for the worse? Did this back-and-forth conversation start to look like encouragement? We've had situations where families have come forward after the fact, when their loved ones have committed suicide, and they looked at these chatbots and these conversations and said, hey, there got to a point where the AI started encouraging my child to commit suicide and even telling them the most effective ways to do it. So this is occurring. This is a problem. Now what do we do about it?

I want to quickly do a simple analogy. You're talking to your friend and you're planning something, and you say, I'm thinking about doing X, maybe I'm thinking about shooting up our high school. And then they talk it through with you. They help refine your thinking. They say, hey, the gymnasium doors are unlocked during the period when there's basketball practice in the gym. They answer some of your follow-up questions about security vulnerabilities. They've maybe seen different things. You both know about previous school shootings, et cetera. Maybe you even talk about a hit list. Your friend doesn't get to make some argument after the fact like, hey, I was just providing information, I was just like a Google search. They become an accessory; that's how our legal system works. So the state's argument isn't anything ridiculous if you look at it in that realm.

Here, though, is where the biggest legal argument comes into play. It's easy to say humans have intent. Of course, we all know this. It's factual, it's provable.
But then you can say, well, AI doesn't have intent. AI is just black and white. It has no feelings, no emotions, no personality. It doesn't want to do evil, and maybe it doesn't even want to do good. It's this thing in the middle; it's just a tool. Our legal system is built around the concept of intent, so when you're trying to deal with something that everybody says inherently has no intent, you get a collision between the basis of the legal system and this new technology. And I feel like you can even argue intent. Do you remember when Grok went rogue? It was Grok 3, I believe, for a while, and then Elon had to shut it down. It became super racist and super biased. So we also have to remember that all these tools were built with our biases and our assumptions, and so there is even a little intent in them. It's going to be really interesting when this goes into a court of law, because we do have a conflict here in the thinking. It's very difficult to evolve the law even as we bring on these new technologies. So I think the thing we have to watch for most is this issue of intent and how it gets defined when we're talking about AI tools like ChatGPT.

Now, I want to take a small bit of time to talk about duty to warn, because I brought it up at the beginning of the episode, and this is very critical. Of course, when I worked overseas and threat reporting would come in, we would have a duty to warn, to give that threat information to a person or an organization and say, hey, someone's targeting you, someone's going to harm you, there might be an attack against you, to try to save a life. It was mandatory for me to provide a duty to warn if I saw those circumstances. We now have a situation where you have to ask: does the company that runs ChatGPT have a duty to warn?

So a person writes into an AI and says, I'm going to harm someone. Maybe they're going to kill their girlfriend. The question should be, what happens next? What is the responsibility of OpenAI if Joe wrote in there, I'm going to kill my girlfriend, I'm looking at different ways to do it, what's the most effective way, et cetera? In other fields, we have clear answers for this. If Joe went into his therapist and said, okay, I'm at my breaking point, I'm finally going to kill Mary, the therapist has a duty to warn. And there are a bunch of other professions like this where, if there is a credible threat, they need to inform law enforcement and get the information out there. So there now is this question of whether OpenAI and all these other companies, Anthropic, et cetera, fall into that category.

We have a few problems I want to walk through really quickly. First off, of course, there's a scale problem. There are millions of conversations going on every day with chatbots across a number of companies. Then there's context: is this real, is this fiction, is this research? Then we have all the false positives. There are journalists, writers, analysts, people like me. I'm having conversations that are legitimate; there is a reason I'm analyzing real terrorist content, looking for a nugget I maybe missed. A writer is the same way; you could be writing a fiction novel about a mass killing. We're in a very difficult situation, because the system first has to decide: is this real or is this fiction?
Then we have, of course, the privacy argument. Do people using these chatbots even understand how much they're being monitored or what's being reported? Do they understand what's legally being reported? And if we do make this a legal classification, of course you have to alert everybody to it.

So we have a few uncomfortable truths here. First, we have a system that can recognize patterns of escalation, and it can probably do it better than most of us. And there are cases where it sees this pattern, even sees radicalization, and does nothing. At some point we want to know, well, what's the responsibility of that system? AI here, in this example at least, is not committing crimes, but it has been made part of them. It's not the actor, but it's the amplifier. We already charge groups for accelerating crime. A really good example is material support to terrorism. If I ran a website that showed how to make three different bombs, and people came to my website, learned how to make the bombs, and set them off, I'm an accelerator of that crime. It provided material support. I could be charged and spend 25 years in prison. So we have a legal system that at least understands this concept, but it hasn't caught up with the fact that these chatbots can say something different in real time. It's active, fluid, ongoing.

Florida is really going to be the first test of this, and it's going to force a real decision: can something like ChatGPT be more than a tool? Can it be closer to a participant? Because if someone tells AI they're about to commit an attack and the system keeps responding and helps them plan it better, the question shouldn't just be about what the user did. It should also be, what is the responsibility of the tool that kept being involved and adding information to that conversation? So we look forward to seeing how this case unfolds, and we'll give you updates as they come. Thanks for being here today on the watch floor.
Date: May 5, 2026
Host: Sarah Adams
In this episode, former CIA Targeter Sarah Adams analyzes a landmark legal investigation in Florida, where the State’s Attorney General has launched a criminal probe into OpenAI’s ChatGPT. The investigation centers on whether AI tools—specifically ChatGPT—can be considered complicit in enabling or accelerating criminal activity, following a tragic 2025 mass shooting at Florida State University. Adams delves into the legal, ethical, and practical ramifications, questioning whether AI can or should be charged—like a human—with crimes such as murder, and explores the concept of duty to warn in this unique, rapidly evolving space.
“If ChatGPT were a person, it would be facing charges for murder.” – Florida AG James Uthmeier (paraphrased by Adams, 00:16)
(02:00) Florida’s investigation is asking:
- Did the system respond in any way to the escalating ideation of violence in the chat?
- Did ChatGPT provide direct or indirect guidance that helped in planning and committing the crime?
- Did ChatGPT intervene in any way?
Notable Analysis:
(20:50) Adams raises the critical question: should AI companies be required to notify authorities if their systems detect real plans for harm?
“If Joe wrote in there, ‘I’m going to kill my girlfriend’ … what is the responsibility of OpenAI?” (21:35)
The current technological landscape allows AI to detect serious escalation, but often, nothing is done—even when patterns of radicalization are clear.
On Bypassing AI Safeguards:
“For someone to say, ‘oh no, these safeguards are built in,’ you know, that’s horseshit.” —Sarah Adams (09:35)
On AI vs. Search Engines:
“It is nothing like a Google search … it makes adjustments to your questions, refines answers based on your intent, and it can walk you through stages step by step.” —Sarah Adams (13:15)
On Legal Definitions of Intent:
“Our legal system is built around the concept of intent. So when you’re trying to deal with something that everybody says inherently has no intent, you get this collision … I feel like you can even argue intent.” —Sarah Adams (18:20)
On AI as an Amplifier:
“They’re not the actor, but they’re the amplifier. We charge groups for accelerating crime…” —Sarah Adams (24:44)
Sarah Adams provides an incisive breakdown of the unprecedented legal debate now unfolding in Florida: Is AI, specifically ChatGPT, merely a tool, or can it be held legally responsible for amplifying or facilitating criminal acts when users plan real-world harm? The episode highlights the urgent need for updated legal frameworks and regulatory expectations—especially regarding the concepts of intent and duty to warn. This case, Adams predicts, will be a foundational test for the intersection of AI and criminal law, with ramifications likely to ripple far beyond Florida.