
A
Artificial intelligence is changing the insurance game. Let's find out if it's for the better or for the worse with today's very special guest, Amber Moss. Welcome to the Artificial Intelligence Podcast, where we make AI simple, practical, and accessible for small business owners and leaders. Forget the complicated tech talk or expensive consultants. This is where you'll learn how to implement AI strategies that are easy to understand and can make a big impact for your business. The Artificial Intelligence Podcast is brought to you by FractionAIO, the trusted partner for AI digital transformation. At FractionAIO, we help small and medium-sized businesses boost revenue by eliminating the time-wasting, non-revenue-generating tasks that frustrate your team. With our custom AI bots, tools, and automations, we make it easy to shift your team's focus to the tasks that matter most, driving growth and results. We guide you through a smooth, seamless transition to AI, ensuring you avoid costly mistakes and invest in the tools that truly deliver value. Don't get left behind. Let FractionAIO help you stay ahead in today's AI-driven world. Learn more and get started at FractionAIO.com.
Amber, I'm really excited to have you here, because one of the areas where artificial intelligence is at its best is organizing, sorting, and analyzing data. We often think about making videos and all those cool things, but that's not the most useful part. The biggest value often comes from the boring stuff. And when I think of boring, I think of actuarial tables and statistics, which is where I'm at my weakest. Spreadsheets are my great weakness. But I know that a lot is changing in insurance. How do you see AI changing everything, especially when it comes to risk and risk management?
B
I think right now it's slowly coming into our industry, and we have to be very strategic and careful about how and where we're using the data, because we're working with very sensitive data in my line of work, specifically healthcare. I'm working with medical diagnoses, I'm dealing with claims, I'm dealing with Social Security numbers. So for me it might not be the best to use AI in a situation where I'm presenting data like that. But a lot of our team is able to produce more beautiful presentations for our standard verbiage and risk management lingo, so it's helping us produce better presentations. On the legal side, though, I think what's going to happen is when we go to court, because you do get sued in insurance, we'll have to say, "Judge, I didn't say that. I used artificial intelligence to say that." So we have to be careful about how we do our presentations and how we note when AI is being used, so that in a court of law we know who said what and where it came from.
A
That brings me to what I think is one of the biggest dangers with AI, which is that it leads us to be too lazy. AI can do 90% of the work, but we tend to let it do a hundred percent, and that's where the glitches happen. It did fine the first eight times, so I stop really double-checking, and the ninth time something slips through the cracks. I see this all the time, where people will post something they didn't read.
B
Right.
A
And that's really, I think, the most dangerous thing. The biggest problem is that it's so easy. It's easy to slide into "it's probably fine." That's the thought that goes through your mind.
B
I'm a current law student, and one thing we're noticing are the AI hallucinations. There have been recent cases, I have one right here, where a lawyer submitted something fake. It was going to go to trial. He used ChatGPT in a legal briefing, and it got sent to the judge. The citations looked correct, it looked legitimate, but it was purely a hallucination. So the legal field is also worried that judges might become obsolete. AI started back around 1950, so it's been a long time coming. I did a lot of research on the founders of AI and all that good stuff. But on the legal side it's becoming a real problem, because if you're not protected properly with cybersecurity, and that's something we offer all of our clients, cybersecurity protection and policies, you're exposed. There were at least seven cases submitted to federal courts that looked legit but were not. So while there are so many positives, my biggest concern is the legal side of things: how do we control what is being sent to our courts or submitted at trial, and who said it, where did it come from, and what software are you using?
A
Yeah, you bring up something really important, which is that when it comes to the law, you have to be correct 100% of the time, not 90 or 95% of the time. There's a really big difference. And there's this mistaken belief that AI never makes a mistake, which only holds in narrow categories: it rarely makes spelling or grammar mistakes. Everything else, it makes a lot of mistakes. I saw something as recent as someone asking the Twitter AI, "When did Elon Musk buy Twitter?" And it got the date wrong. If you're Twitter's AI, you should know when your master bought you. If it gets something small and common like that wrong, what else is it getting wrong? I've had many times where it makes a mistake, goes, "Oh, sorry, you're right," but then if you ask again later, it still gets it wrong. That's really hard for people to understand: it's an AI, but it's still fallible. In our heads, we think something is either a perfect intelligent machine because it's an AI, or it's a human. But it's really somewhere in between. I think this is a really important area.
We've really changed the definition of AI, which I think is a big mistake. AI used to mean a sentient machine, and it doesn't mean that anymore. Now it means a pretty smart word processor or image maker. When I was watching Terminator 30 years ago, I never thought someone would say, "You push a button and it makes a picture of a cat riding a unicycle. That's also AI." It's very different, and this mission creep, this definition creep, has caused a lot of problems. Many people say we're calling it AI, but it's not. So they say strong AI, weak AI. Now they're saying artificial general intelligence, which is a terrible name, or they'll probably go with synthetic intelligence next. But the important point is that you still have to double-check the work, just like you would for any first-year associate out of law school or a summer intern. Mistakes happen, and the assumption is that AI never makes mistakes. One of the things that happens as you work with AI a lot is that you start to notice consistency in the mistakes. There are certain things ChatGPT does that let me always tell when someone's used it. I'm working on a book project for someone, and I can tell when they used ChatGPT on certain chapters. I can say, "Oh, you didn't write this. It doesn't sound like you anymore, and it uses certain words we never use." It's different for every person, but there are consistent giveaways. Whenever I hear "in the ever-changing digital landscape" followed by a comma, I know it's AI, because humans don't say that; it's a weird phrase. Or "pondering." I've never pondered; I've thought. So there are certain giveaways.
B
That's true. Certain giveaways, yep. And going back to insurance, we have very smart clients. We're working with multimillion-dollar companies and CEOs, and we have to go in with our best foot forward. I personally use artificial intelligence a lot for my consulting firm, but when it comes to my insurance work, each case is so different. The way they fund their plan, the way you have to speak, I think needs to come from yourself. We are slowly warming up to it. But I'll be honest with you, it's only in the last few months in my organization that we're seeing teams going, "Oh yeah, I used AI," and I'm going, "Well, where did you get that AI data? What are you using to do it?" We need to be consistent as an organization. We pride ourselves on giving our clients 100% of our best, so we're very careful about what we use AI for, just because of the critical nature of the data.
A
Yeah, there are a lot of legal implications of AI that people don't pay attention to. There are these different pieces you have to have: Do you actually have a contract with the company? Do you have a BAA that says they won't train on your data? On my main project, we have to be HIPAA and SOC 2 compliant, which means we have very specific requirements for data and for which tools we're allowed to use. So people ask, "Which AI can we use?" and I'm like, "None of them, until we sign a contract." That's really hard to understand, because, just like with insurance, accidentally revealing personal information is a really big deal. And it happens all the time, incidentally, in ways you don't think about: you're doing a screen recording and you accidentally flash the wrong tab for one second. Yeah, that counts. So I'm really interested in how things are changing going forward. You mentioned cybersecurity insurance earlier. Is AI changing the approach to that? Because now there's this new vector where people can use AI to mimic a voice or mimic a video, or do these new phishing attacks or smishing attacks. There are all these different types of attacks. And you can socially engineer a chatbot now; you can actually trick a chatbot into revealing information if it's trained on too much information, which is something I struggle to convince my clients about. I'm always overly security conscious. I'm like, the chatbot shouldn't know anything you don't want everyone to know, because someone can always trick it. There's always a better mousetrap. So how do you see insurance changing? Do you think we're going to start seeing artificial intelligence insurance? Because we've seen the way viruses online have changed. It used to be that people released viruses just because it was funny to crash computers.
Then we've seen mistakes from big companies like CrowdStrike, where they accidentally shut down a large portion of the Internet because they made a mistake with an update. And in between, we're seeing more of these data ransom attacks, where attackers lock down a company's data and say, "Give me money and I'll give you back your data." Ransom, that's the word I was looking for, not hostage. Maybe in the future they'll be holding your AI. So what do you think is the next iteration? Are we going to start seeing insurance for the AIs you create, or insurance against AI attacks? Do you think it will change?
B
I do. I'm looking at a policy right now, and they look at a company's previous cyber incidents: extortion, a malware infection, data loss, privacy breaches, ransomware, a denial-of-service attack, theft of funds. That last one is another place AI comes in, because AI can trick you into giving up funds. And there's an "other" field where you could put AI if there's something there. I don't write these personally; our commercial department writes these policies. But I do think there's room for coverage, depending on how the AI affects the actual attack on the information. And we pretty much tell every client with assets and employees that uses the Internet, 100%, protect yourself with a policy. It's just too scary now. Another thing I learned: there are attacks now where an attacker gets into your system and hangs out for years, and you never know they're there. I know I'm going off AI here, but AI is probably smart enough to start some of this. I just did a report on a situation where attackers were in a company's data for over a year with no one knowing it was happening. So yes, I definitely think AI policies, or AI coverage within these cyber insurance policies, will probably become their own category. I don't think there's a lot of legal data yet on the actual cost or threat AI poses, because right now most of us are using it to make our presentations pretty. An actuary, somebody trying to define mortality and things like that, is not going to use AI for that information. I don't think we're at a point, at least in insurance, where we have a huge risk, because we're very careful about how we're using AI.
A
Do you think that now that everyone's working remotely, it's increasing the risk of this type of vector, because people aren't in the same room?
B
I do. I actually met with my CEO last week and had a conversation with him about it, because it's come to my attention that a lot of folks are using AI in their daily work. I did a poll of about a thousand people. I didn't get a huge response, but it looks about 50/50 between those who use it daily and those who don't. That's one thing companies have to know: who is using AI in your organization, what are they using it for, and what program are they using? You can't just go out into the world and do anything, and then that data is available to anybody. So our organization will have one software program that is vetted, safe, and protected, and we will have training on how you use AI and what's acceptable and what's not.
A
And that's one of the big challenges: AI changes so fast. The tools that are available, the promises that are made. There's a major AI news story every single day. It's my full-time job to keep aware of everything, and often I'm just aware of the changes; I haven't had time to read the whole new white paper or the new article every day, because so many things happen. That's where I think we're approaching almost a singularity: things are happening so fast it's mathematically impossible to keep up. I use an AI to tell me about other AI things. And I think we're going to start seeing people building AIs to protect their infrastructure and AIs to attack infrastructure, and more and more of that is the future. I actually think there's going to be a larger shift back toward in-person. The way we used to handle the Internet was that you didn't have access to the Internet at your job; there was an intranet. If you needed a piece of data, they would download the website and you could view it, but you couldn't actually connect out of the building. That seems to have disappeared over the past 25 years or so, since I first started working in IT in 1999. Now we just block a few random websites, but you can go to almost anything, and the expectations have changed so much. One of the challenges we've had is just convincing everyone that we have to put certain security infrastructure on their work laptops, and if they're using a personal laptop for work, we have to put it on there too. We have to secure the endpoint, we have to secure passwords, and we have a little app on there that tells us when you're not doing it. So I get a text every hour for anyone in the entire infrastructure who isn't.
It's really hard to create a culture of security where people realize what's at stake. Once a company enters certain compliance regimes, there are rules for how you have to discipline people who don't follow them. If you don't update your software within a certain amount of time, you get a warning. Do it again, you get another warning. The third time, the company has to fire you to maintain its compliance. We're entering this new phase where we just have to have these really strict rules, because there are still people falling for old scams, like sending money to a prince from Africa, or plugging in a USB stick they find in the parking lot. It's amazing that these things work even on people who are very switched on; the scams can just catch you at the right time. And I saw something really interesting happen: LinkedIn has become a really popular vector for attacks. Attackers look at what company you work at and start sending you emails that seem like they're from the CEO of that company. So I've started getting...
B
Yeah, trying to initiate some type of conversation. And it's easy to fall for that. Yeah.
A
You don't notice, because it shows the CEO's name, unless you look at the actual from line. And part of it is that all of these inboxes and email servers claim they have an AI, but they're dumber than ever; they're letting more stuff through. If the display name of the sender and the from address don't match, the email should not come through, or should at least have a flag on it. Why is that so hard? We still don't have security that I thought we'd have 30 years ago. And what's interesting is that when you change a position, and this happened to me, you start getting tons of emails like, "Congratulations on your new position!" And we're definitely not friends. I actually can't tell the difference between really bad sales and an attack. But I don't want to be friends with you either way.
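The missing check Jonathan describes, comparing the display name against the actual from address, can be sketched in a few lines of Python. This is a minimal illustration, not a real mail filter; the `trusted` directory, names, and domains below are hypothetical:

```python
from email.utils import parseaddr

def looks_spoofed(from_header: str, trusted: dict) -> bool:
    """Flag mail whose display name claims a known person while the
    address domain differs from the one on file (hypothetical directory)."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    expected_domain = trusted.get(name.strip().lower())
    # Only flag names we actually track; unknown senders pass through.
    return expected_domain is not None and domain != expected_domain

execs = {"jane ceo": "example.com"}  # hypothetical company directory
print(looks_spoofed('"Jane CEO" <j.ceo@mail-example.xyz>', execs))  # True: flag it
print(looks_spoofed('"Jane CEO" <jane@example.com>', execs))        # False
```

Real mail systems do this alignment check with SPF, DKIM, and DMARC rather than string comparison, but the principle is the same: the claimed identity and the sending domain have to agree.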
B
Right.
A
I can't. One person, actually from a large company, because I saw the email (I didn't click anything), said, "We're your vendor for this." And I was like, you're definitely not. I've only been working in this role for three months, I don't change my LinkedIn very often, and I know our entire tech stack.
B
Right.
A
I run the dev team. I know all of this stuff. I even messaged my lead engineer, "Have we ever worked with this company?" She was like, "Absolutely not." So even large companies are trying to use this. I don't know if there's a better word for it than deception marketing.
B
But I think another thing we have to be aware of is that there are things we can't assume we know. Right? I'm learning that in law school. That's when you become weak: when you trust something a lot, thinking, okay, we've got this software, we know we're in good hands. No. With this, you never know. You have to always be alert, and like you were saying, you have to keep up with the software updates, because more than likely we're all always exposed to some type of attack these days. We have to understand that we don't know what we don't know.
A
I used to think, oh, my website's too small, no one would ever attack me. Then I put on software that would alert me every time there was an attack, and I had to turn off the alerts. It was too frequent; every three minutes an attack would come in. There are these massive tools watching for new websites, because those have the weakest security. The best time to slide something in is before someone's got everything installed, right while you're setting it all up, before they know what's going on. I'm very security conscious and very paranoid, but it doesn't matter, because all you need is one unlucky moment or something weird. I've had credit card numbers stolen multiple times.
But every time there's a transaction on any of my cards, I get a text. So as soon as something weird happens, I'm like, that's definitely not me. And we're seeing more and more changes in marketing to adapt to this, and I just wonder if it's ever going to get easier. I almost feel like we're shifting back to the in-person meeting. The only thing you trust is meeting someone in person, where you can look them in the eye, because that's the only way you know you're talking to an actual person.
B
True. Because there are meetings just like this where I might not be who I say I am, and I've heard crazy things. As a risk manager, I think the best we can do is make sure that anyone who has a business has cybersecurity coverage. Everybody needs identity theft protection. I've had it for 20 years; I have a million dollars of coverage. It makes me feel better that if my identity is stolen, I can file a claim for a million dollars. And credit monitoring. Those three things, credit monitoring, identity theft protection, and cybersecurity policies, back you up as much as you can against that risk. To me, that makes sense, and it's not very expensive. It's worth it just because of the unknowns.
A
So I want to ask about something else you brought to mind. I see a lot of commercials now for these services where you can pay data brokers to stop selling your data, and I don't believe it's true. This is just my feeling, and I'm not going to name any specific companies, but it feels like they're saying, "We stole your data, but if you pay us, we'll stop selling it." You've already sold it a thousand times. It's already out.
B
Visa's had a breach, Wells Fargo's had a breach, Bank of America's had a breach. My identity's probably been stolen six times. And again, I've had things happen personally. Somebody used my Social to try to get an apartment in Oregon. I've never been to Oregon, so I had to fight that. But I had the insurance coverage to fight it for me, so it wasn't a big deal. Stealing your data isn't hard anymore. You just have to make sure that risk is protected as best as possible, in my opinion.
A
I think one of the challenges is that the vector is always what you don't expect. The ten biggest losses in Vegas were the things they didn't expect: they used to have insurance in case the tiger went into the audience, but they didn't have insurance on the tiger attacking the performer, and that cost them. It's the thing you don't think of that always gets you. And now we're seeing these different types of attacks. My mom actually sent me money for one of my kids' birthdays recently, and my sister called me ten minutes later: "Did mom send you money, or did she just get tricked?" And I was like, oh, that makes a lot of sense, actually. No, it really was us. But it was a really good question, because it's very common for people to pretend to be you. I was doing a video call with my sister so she would know it was me, and when she messaged me, I wasn't offended; it totally makes sense, because it's probably exactly what scammers do. And she gets alerts whenever any money goes out of my parents' accounts, so that nothing happens. You have to get to that point, because there are so many different ways of tricking people and misleading people, or man-in-the-middle attacks.
And something I find interesting is that a lot of companies still use insecure security measures. A lot of companies still do the text message thing: "We'll text you the code." I'm like, that's been insecure for a dozen years. Everyone knows how to get around that, and they're still doing it. So I always have to set up these policies at work where it has to be a device, it has to be a 2FA app, it has to be scanning the QR code, because the text message one is just not that secure.
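The app-based code Jonathan is describing is usually TOTP (RFC 6238): the QR code you scan just carries a shared secret, and both sides derive the same short code from that secret and the current time, so nothing usable ever travels over SMS. A minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" in base32, T=59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

Unlike an SMS code, the secret never crosses the network after enrollment, which is why SIM-swapping an SMS-based login works but doesn't defeat an authenticator app.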
B
No, you're right.
It's really scary. That's why, with insurance, I joke around with my team that the hardest part of my job is chasing my passwords and authenticators. I've got, say, 100 websites I work on for carriers and different things, and we keep everything secure, so I had to go through three different processes just to get into one website. But it's worth it for my clients. The more you can do to protect your data, the healthier your organization will be as far as cybersecurity.
A
There's this story from high school. One of my friends was in a band, and he went to the store to buy drums with $800. He could buy a set of $500 drums with cases, or a better set of drums for $800 with no cases. The smart move is the cheaper drums with the cases, because drums are going to get trashed; I can't remember which one he bought. But that's the important lesson. We often think, it'll never happen to me, or I'm too small, all of these things. But actually most attacks are mass attacks: they hit every small website they can, and they're just looking for volume, because you only need one person to fall for it. I used to know someone who was a spam emailer for a living.
B
Wow.
A
And he would just send people the dumbest emails, for something you spray in your throat and then you lose weight. And I was like, oh my gosh, who would ever click on that? And he's like, enough people do. So that's the problem: enough people do that it's worth it at the volume level, if you email enough people pretending to be their CEO. Because all the data on LinkedIn is very easy to extract.
B
Yes.
A
Very easy to download.
B
Right.
A
And when I changed my position, LinkedIn asked, "Do you want to post this?" I was like, no.
Definitely not. I don't want a bunch of those trite messages, and I just don't want that on my feed; it's not that big a deal. Especially because I waited a really long time, and I still started getting messages within minutes. All by email, not even through LinkedIn. All these people have automated systems for any time someone changes a job position, and you don't see it coming. I have this saying: if someone sends me a social media message on my birthday, I know we're not friends. So last birthday, everyone who gave me a birthday message on LinkedIn, I disconnected from, because you know who didn't give me a LinkedIn message? My kids, my wife, my sisters, my parents.
B
Doesn't matter.
A
Yeah, the people who know you in person. It would be super weird at my birthday party if I said, "Where's my present?" and they said, "Check your LinkedIn wall."
B
Yeah.
A
What do you think?
B
We work fast nowadays, too. I know in our industry we're doing 15 or 20 things at a time, and I think sometimes that will hurt you. We have to slow down sometimes. AI is making us look smarter, but we're also working even faster, so there are going to be more mistakes. I think it's a hard area to manage in a big organization, but you have to. We're taking steps to ensure that everybody is doing the same thing with AI. I know that's hard to manage thoroughly, but that's our goal, for sure.
A
Yeah, it's really hard to get everyone on the same page, because once people develop a habit, they're a ChatGPT person, and switching them to a different platform is really hard. Or it's just that you can't use that tool at work. With my new project, I have a work laptop that I only use for work. It's not super convenient, but I'm like, if I don't do it, nobody else will. You have to set the example. There was this famous story of a big video game company that got hacked for billions. Attackers were deleting players' accounts, and for seven years the company couldn't figure it out. Every single user at the company had changed their password, and one person didn't: the CTO. And that's who they'd hacked. I don't want to be one of those stories. I don't want to be the lawyer who submitted the brief that was all written by AI, or the one person who says, "My password would never get stolen." So I don't have anything that's not a work app on that computer. Being that secure is the only way you can set the tone. And that's one of the big problems I see: a lot of CEOs and C-suites go, "Everyone else has to be secure, but our stuff is fine," or, "I don't really know how to use my computer." I've definitely known CEOs for whom the computer is more of a decoration. It's not even plugged in. And I guess if it's not plugged in, you're okay.
B
Yeah, the centerpiece.
A
Yeah. And the thing is, now that I get an alert every time someone has an out-of-date app on one of their computers, I notice how fast updates come out, how many security breaches are constantly being found, and how hard it is to keep up with all of it. So I think it's very interesting to see how the insurance world is changing. It's important for people to realize it's not about the size of your company. There's also a misunderstanding that these different types of insurance are massively expensive. But because the typical attack is a mass attack, where they hit a bunch of people and get a random hit, the odds of any one company being hit are very low, so the premiums are low. It's a statistics thing, a lot of spreadsheets and math, but it means coverage doesn't have to be expensive. Still, you're spinning the wheel of fortune, and if you hit "bankrupt"... you probably won't, but if you do. So it's a very interesting shift. I think this is very valuable for people: starting to understand what's important, and seeing the intersection between AI and insurance and how insurance people are being very cautious. For people who want to know more about what you do, connect with you online, and maybe even buy a little insurance, where's the best place to connect with you and find out what you're doing?
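The "statistics thing" behind cheap premiums is roughly expected loss plus a loading for expenses and profit. A toy sketch, where the probability, loss amount, and loading are hypothetical illustrations rather than market figures:

```python
def fair_premium(annual_hit_probability, expected_loss, loading=0.35):
    """Pure premium (probability times severity) grossed up by an
    expense/profit loading. All inputs here are hypothetical."""
    pure_premium = annual_hit_probability * expected_loss
    return pure_premium * (1 + loading)

# Hypothetical small business: 0.5% yearly chance of a $200,000 incident.
premium = fair_premium(0.005, 200_000)
print(round(premium))  # roughly $1,350 a year, far below the loss it covers
```

This is why low per-company odds on a mass attack translate into a modest premium: the insurer spreads one large, rare loss across many policyholders.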
B
You can go to our website, which is hotchkiss.com. My email's on there too, under Amber Moss. We have over 200 employees and very strong risk managers, and the company's been around over 50 years. We can help you with any questions you may have relating to cybersecurity, really any insurance. We do it all; we're risk managers. We're happy to have the discussion and help you become more secure in your business and protect what you've worked your life for. Because it only takes one big attack to bring down your entire organization, which is a really sad thought.
A
It sure is.
B
Yeah.
A
Thanks for ending us on a sad note.
B
It's something that we see, but we're here to help. That's the good news. There are ways to protect yourself and just be alert, be aware and cover your risk as best as you can. That's all we can do. Cover the risk.
A
You're absolutely right. Thank you so much for being here for today's amazing episode.
B
It was fun. I had fun.
A
Bye, everyone.
B
Bye.
A
Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and strategies on how to leverage AI to grow your business and achieve better results. In the meantime, if you're curious about how AI can boost your business's revenue, head over to artificialintelligencepod.com/calculator. Use our AI Revenue Calculator to discover the potential impact AI can have on your bottom line. It's quick, easy, and might just change the way you think about your business. While you're there, catch up on past episodes, leave a review, and check out our socials.
Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Host: Jonathan Green (A)
Guest: Amber Moss (B) – Insurance risk manager, law student
Date: May 5, 2025
This episode takes a practical dive into the impact of artificial intelligence (AI) on the insurance industry, focusing particularly on risk management, data security, and emerging legal and cyber threats. Host Jonathan Green and guest Amber Moss—a seasoned risk manager and law student—explore the double-edged sword of AI in insurance, highlighting its benefits, real-world pitfalls, evolving threat vectors, and the importance of vigilance in an era of accelerating technological change.
Data Analysis & Presentations
“A lot is changing in insurance…we have to be very strategic and careful on how we’re using the data and where we’re using the data because…we’re working with very sensitive data.”
Caution with Sensitive Data
Legal Documentation & Accountability
"We have to say, ‘Judge, I didn’t say that. I used artificial intelligence to say that.’ So we have to be careful on how we are doing our presentations and how we’re noting when AI is being used." —Amber Moss [02:38]
Hallucinations Leading to Real-World Consequences
“The citations were correct, it looked legitimate, but it was purely a hallucination.” [04:00]
Absolute Accuracy is a Must
“When it comes to the law, you have to be correct 100% of the time, not 90 or 95% of the time.” —Jonathan Green [05:09]
Human Supervision Remains Essential
“…AI can do 90% of the work, but we tend to let it just do 100%. And that’s where the glitches happen.” —Jonathan Green [03:03]
Detecting AI-Generated Content
“Whenever I hear the word ‘landscape’ and then there’s a comma, I know it’s AI…” —Jonathan Green [06:48]
AI Changing Cyber Insurance
Insurance policies now evaluate past cyber incidents: extortion, malware, privacy breaches, ransomware, and increasingly “AI-caused” threats.
“AI can trick you into giving funds…There’s room for coverage depending on how AI affects…the actual attack.”
Cyber attackers are more sophisticated; AI can help both defenders and adversaries.
Data Security Complexity
Strict compliance (HIPAA, SOC2, etc.) means companies must carefully vet all AI tools before use, sign contracts (BAAs), and prevent accidental info leakage.
“People are like, which AI can we use? I’m like, none of them until we sign a contract.” —Jonathan Green [09:03]
Accidental breaches can happen easily (e.g., sharing the wrong tab during a screen recording).
Remote Work & Risk
Distributed teams increase “attack surface”; companies need strict policies on permissible AI usage.
Standardizing/training on approved, secure AI software is essential for corporate safety.
Keeping Up with AI Innovation
Rapid AI evolution means new tools, and new threats, emerge daily:
“There’s a major AI news story every single day…It’s mathematically impossible to keep up.” —Jonathan Green [14:46]
Companies use AI to protect, as well as attack, infrastructure.
Stricter Security Cultures
“Once a company enters certain compliance rules, then there are rules for how you have to punish people who don’t do it…you get a warning…third time, the company has to fire you…” —Jonathan Green [16:21]
Classic Scams Still Work
Comprehensive Protections Are Key
“Americans need to ensure that if they have a business that they have the cybersecurity coverage…Identity theft protection…credit monitoring…It’s not very expensive…just because of the unknowns.” —Amber Moss
Skepticism Toward Data Broker Opt-Out Services
“If someone’s already stolen your data…you’ve already sold it a thousand times.” [21:36]
Evolving Vectors and the Unknowable
Authentication and Security Fatigue
“The hardest part of my job is chasing my passwords and authenticators…I had to go through three different processes to get into the website. But it’s worth it for my client.” [24:34]
Most Attacks Are Opportunistic
“Most of the attacks are really like mass attacks, as in they just hit every small website, as many as they can…and you only need one person to fall for it.” —Jonathan Green [25:06]
Leadership by Example
“You have to set the example…one person didn’t [update password] and it was the CTO and that’s who they’d hacked.” —Jonathan Green [28:31]
Standardization and Policy Enforcement
Amber Moss on Legal Dangers:
“There were at least seven cases submitted to a federal court that looked legit, that were not…My biggest concern is the legal side of things and how do we control what is being sent to our courts or what is being sent in trial and who said it, where did it come from and what software are you using?” [04:48]
Jonathan Green on AI’s Limitations:
“AI never makes spelling mistakes or grammar mistakes. Everything else, it makes a lot of mistakes.” [05:20]
On Insurance Mindset:
“You have to always be alert…we don’t know what we don’t know.” —Amber Moss [18:47]
On Vigilance:
“I used to think, oh, my website’s too small. No one would ever attack me…Every three minutes an attack would come in…I had to turn off the alerts, it was too frequent.” —Jonathan Green [19:31]
On Leadership Responsibility:
“Don’t let that…I don’t want to be the lawyer who submitted the brief, it was all written by AI or the one person who goes, my password would never get [hacked].” —Jonathan Green [28:31]
| Timestamp | Topic |
|--------------|--------------------------------------------------------------|
| 01:06–02:38 | Amber Moss on AI’s role in insurance, risk management, data |
| 03:03–04:30 | Dangers of over-relying on AI & legal hallucination stories |
| 05:09–06:48 | The necessity for 100% accuracy in law & insurance |
| 09:03–10:27 | Data security, contracts, compliance, HIPAA/SOC2 challenges |
| 11:27–13:40 | Cyber insurance: new risks, possible AI-specific policies |
| 13:50–14:46 | Remote work & risk, standardizing AI use in organizations |
| 16:21–18:47 | Security cultures, compliance, modern phishing stories |
| 19:31–20:38 | Persistent attacks, personal experiences with credit theft |
| 20:38–22:48 | Identity theft protection, credit monitoring essentials |
| 24:34–25:06 | Security fatigue, passwords, and real-world discipline |
| 28:31–29:30 | Leadership, examples of organizational weakness or breach |
| 30:50–31:52 | Amber Moss on connecting with her firm for risk management |
Contact for Help & More Info:
Amber Moss and her team can be reached at hotchkiss.com for insurance and risk management expertise.
“Cover your risk as best as you can. That’s all we can do. Cover the risk.”
— Amber Moss [31:35]