Beau Friedlander
In September 2023, one of the most tightly controlled environments on earth stopped working. It wasn't a power outage. We're not talking about a power plant either. It wasn't a fire. It was a cyber attack. Now, what's so secure, right? What's the most secure place on earth that's not a bank?
Podcast Announcer
Las Vegas is still under a cyber attack.
Beau Friedlander
This is day six.
Charlotte Jupp
And here is everything you need to know and are not being told.
Beau Friedlander
Guests were locked out of their rooms. Hotel key cards failed. Slot machines went dark.
Podcast Announcer
Systems have still not recovered at MGM properties in Las Vegas. But the hotels are busier than ever.
Beau Friedlander
If you're anything like me, the first thing I think about when a casino is attacked is Ocean's Eleven. But it was the usual suspects. It was a cyber attack. We've seen other large scale attacks.
Podcast Announcer
Tonight, Riviera Beach, Florida is the latest city to pay hackers who took over its computer system.
Podcast Announcer
A warning tonight from federal investigators that hospitals are being targeted by hackers launching ransomware attacks.
Podcast Announcer
More than three weeks since MGM Resorts was hit with that massive cyber attack.
Beau Friedlander
Now, as the company works to return to normal, it sent more information out today explaining what happened and detailing the steps it's taking to get back to normal. So we know this is the new normal. What should we be worried about? Well, we should be worried about our data being out there. Not just our data: my data, your data, everyone's data, because everything can be used. I'm Beau Friedlander, and this is What the Hack, the show that asks: in a world where your data is everywhere, how do you stay safe online?
Charlotte Jupp
You see these cases all the time. I think the big MGM one was a few years ago in Las Vegas.
Beau Friedlander
This is Charlotte Jupp.
Charlotte Jupp
I had a friend who was staying there at the time. He couldn't watch the television in his room because it's all connected to the Internet.
Beau Friedlander
Charlotte knows a lot about, well, this story because she had a friend who was involved in it. She knows a lot about cyber. But let's stick with the story.
Charlotte Jupp
He couldn't get in the door because it's key cards, like everything, they couldn't take payment. And the ramifications and implications are so big in terms of what it can do to your business that everyone can become a target.
Beau Friedlander
Now, Charlotte Jupp is the VP of Customer Success at Outthink, an AI driven cybersecurity company focused on human risk management. And we're going to be talking about human risk today. So with that in mind: what is cyber risk management, and human cyber risk management, in plain English?
Charlotte Jupp
Those are great questions. Cyber risk management, from a business perspective, is understanding where you have different cyber risks within your business and taking different steps to try and reduce them, with the overall aim being to stop a breach from happening. Between 60% and 90% of breaches start through the human element. It's unfortunately a poor person who usually lets the attacker into your network, whether that's your own private account as an individual, or a way into a business. It would be someone perhaps clicking on a phishing link, or giving away credentials, or falling for something like a deepfake scam.
Beau Friedlander
She works with global organizations and CISOs to understand how real people interact with real security systems. Not, you know, pie in the sky, hypothetical, perfect things, but what we actually encounter in real life. And how do we reduce risk without blaming employees? This is all about creating a stronger stack. What's a stack? Think of it like Swiss cheese. One slice of Swiss cheese, you can see right through at least parts of it, right? Put another piece of cheese in front of it and turn it, and you will be able to see through less of it. And if you keep stacking those slices of cheese, you will eventually have something you can't see through. That's the goal. Now, the goal is a noble one, and it's important. But the fact is, the way that most compromises still occur...
Charlotte Jupp
Not most.
Beau Friedlander
Well, most, actually, is through a fallible human being. So that's why we're talking about human risk today, and what it means.
Charlotte Jupp
And so it's really important to understand what the susceptibilities are for you as a person, what you're susceptible to, and trying to take steps to mitigate those susceptibilities.
Beau Friedlander
MGM isn't just a hotel operator, it's a casino company. Casinos are a very particular kind of business, right? They're not just about hospitality. A ton of places do that. They're about continuous, regulated financial transactions. They're 24 hour banks. Every slot machine pull, every button tap, every table bet, every loyalty card swipe, every computer is tracked in real time. Identity, access, money are all tightly linked. And they're all supposed to work without interruption. And by the way, those cameras that are everywhere, they're not just to see if you're cheating. It's also, you gotta think that there's some facial recognition happening there. Now, "you gotta think" doesn't mean you gotta think. But I gotta think. That's what I think.
Podcast Announcer
I asked how long this is gonna be. It could be one day, it could be three weeks.
I don't even want to pay with my card right now. I'm scared that they're gonna hack all of our information.
Beau Friedlander
When those systems fail, it's not just an inconvenience for guests. It creates immediate operational, regulatory and financial problems. Casinos operate under strict gaming regulations, and they rely on precise accounting to prove that games are fair, payouts are accurate, and money is moving the way it's supposed to. You know, which is why I talked about Ocean's Eleven earlier. But there are other ways to hurt a casino than breaking into a vault. And the MGM compromise is a great example, because they lost an estimated $100 million as a result of their incident, largely as a result of business interruption. What this incident exposed wasn't just how a breach can happen, but how fragile even highly controlled environments can be when identity and access systems fail, as they will.
Charlotte Jupp
And I think there are certain targets who perhaps would be targeted first, where it's easier access to some of the data that you might be wanting. Like the VPs.
Beau Friedlander
The attackers didn't start by going after MGM's CEO, right? That would be a waste of time, or a lot of time invested for questionable results. They didn't try to guess a master password or break into a single high value account right away. Instead, they went after something far more accessible.
Charlotte Jupp
Anyone could be a target, because you can start by knowing that this person is connected to that person. You can see that on LinkedIn. And if you want to make that one jump, to try and get through someone to get to someone else, you compromise their account and use that account to then maybe get to the more sensitive data, or the more sensitive person, that you're after.
Chris Tarbell
Access to a group of teenagers hacking casinos.
Beau Friedlander
This group, they didn't start with casinos.
Beau Friedlander
Chris Tarbell is a former FBI agent, famous for busting the Silk Road dark web marketplace.
Beau Friedlander
According to CNN, they cut their teeth on something called SIM swapping, which we've talked a lot about on this show.
Chris Tarbell
And that's when you try to duplicate someone's phone, which requires social engineering of a person working at a cell phone store, who has great power: access to all your records. And so they can convince that person that you lost your cell phone, and I need a new SIM to put in my phone. And then I have complete control of my target's phone. I have a duplicate copy of it. You know, a lot of social engineering comes down to pressure, and putting pressure on people. You know, oh, I really need to get this done. It's Friday afternoon. I gotta get outta here. I'm trying to get home.
Beau Friedlander
And then they set their sights on something bigger.
Chris Tarbell
So it sounds like they found maybe a number of victims through LinkedIn. They looked on LinkedIn and said, hey, this guy works at MGM, he's on the technical side. Which makes sense, because, you know, if I'm looking to hack into a network, I want a technical in. I want a username and password and the two factor authentication of someone who has, you know, power within the system.
Beau Friedlander
Okay. Within minutes of the call, the real employee received a notification that their password had been reset. By the time they reported it to IT, the attackers had already gotten in. Is this human risk a new idea? We all know it's not. We know human beings are the weakest link. So why isn't traditional security awareness training enough anymore when it comes to human risk? I mean, it used to be that you could sit through some modules and be like, okay, I get it, I understand what a phishing email looks like, thank you. I understand the threats out there, thank you. No, no, please, I'm good. I don't want to watch another video. But we're not there anymore, are we?
Charlotte Jupp
These attacks are becoming increasingly sophisticated. There's a lot of information about everybody available on the Internet these days. So it's very easy for someone who is a hacker or a threat actor to send you something that is really targeted, to entice you to click. We've been doing security awareness training for over 20 years, but the people factor is still the main way in.
Beau Friedlander
Once they had access to a legitimate employee account, they could begin moving laterally, using that account to learn more about the network at MGM, request additional access, and impersonate other users. Now, this works better when you're at a bigger place.
Charlotte Jupp
People aren't going out of their way to cause risk to their organizations. They're under pressure at work. The goals that you're delivering on tend to be day to day, not cyber security goals. You're thinking about your business goals. And if there's a cybersecurity policy in place that might slow you down, you might look for ways to go around it.
Beau Friedlander
This kind of movement is exactly what modern enterprise systems are designed to allow, because they need to, because there are so many people using them. Employees are expected to collaborate. They have to share tools, right? Access multiple systems to figure things out, to make sure, I don't know, they all have to look at who's ordering what and where it's going. It makes the day to day work at a place like this possible. And it's also what allows attackers to quietly escalate privileges once they're inside, because there are so many people in there. In other words, the breach at MGM didn't hinge on a single catastrophic failure. It happened as a result of a systemic failure. And that systemic failure is believing that human risk is not the most important factor at a large organization. When ransomware did enter the picture, you know, it was already way too late. The initial access came from a phone call that looked, from the help desk perspective, like a routine support request. And with AI, that routine request is going to look and sound a heck of a lot more believable. It's going to be harder to see through. So that's the real starting point of the MGM attack, right? Not a dramatic break in, but a moment where trust was granted, because the system requires that to work, even if it's a zero trust environment. Because zero trust just means that we're going to ask some questions, and hackers are increasingly good at answering them. Listen, we all know this is the new normal. If you listen to this podcast, you know it's the new normal. And nothing that we just described is unusual. And nobody deserves to be pilloried or pointed at as having been especially this or that in terms of cybersecurity, because there is no this or that in terms of cybersecurity when it comes to human risk. Help desks exist everywhere. Password resets happen every day without incident. People use LinkedIn because they're supposed to.
If this, you know, exploit worked at MGM, it works anywhere.
Podcast Announcer
Losing big this morning: Caesars, the casino giant, confirming it's fallen victim to a massive cyber attack.
Beau Friedlander
On September 15, just days after the MGM compromise, Caesars confirmed it had suffered a major cyber attack, exposing sensitive customer data, with hackers breaching their firewalls.
Podcast Announcer
They say the digital crooks hijacked critical data, including Social Security and driver's license numbers, for customers who signed up for its loyalty program.
Beau Friedlander
Unlike MGM, Caesars made a different call. They called the criminals.
Podcast Announcer
According to Bloomberg, Caesars Entertainment was forced to pay about $15 million in ransom to restore its systems.
Beau Friedlander
Caesars actually chose to pay a ransom. The Caesars disruption, surprise, surprise, was far less visible to customers. There were no widespread outages. Does that make it right? I don't know. We just gave a lot of incentive to the criminals. MGM refused to pay. Does that make them right? They lost $100 million. I mean, on paper, it makes more sense to pay to play, but I don't think it solves our problem. By the time a company is choosing between those two options, the damage is already done. In fact, the damage was already done before anything happened. The damage is just out there, waiting to find a place to land. The breach has already happened. Okay? Access has already been lost.
Podcast Announcer
CBS News cybersecurity expert Chris Krebs says American companies are too vulnerable to these kinds of attacks.
Chris Krebs
We've been in a half decade or so now of very disruptive, and in some cases destructive, ransomware and cyber criminal attacks. We need a more muscular approach.
Beau Friedlander
There are no good options here. We just have to understand that our information is part of the cybersecurity stack now, and so we're picking between bad options. So what are the good options? The good options aren't made in a vacuum. They're made in an existing system. And they get made by a small group of people inside a company whose job it is to understand their own risk, prioritize the things that matter, and decide what the organization can live with and what it can't, risk wise. The person at a company responsible for all this is a chief information security officer, and their job is to keep the company safe. "Head of security" is another way of putting it. But head of security could also include physical security, and a CISO is really just about keeping cybersecurity incidents low or nonexistent. There's no such thing as nonexistent in the land of cybersecurity.
Charlotte Jupp
CISOs tend to have lots of risk across their business, and you do not have the time, the money and the staff to be able to remediate every risk. So it's helping you understand what your most business critical risks are, the ones you do want to do something about, so that your company is not on the front page of the newspaper. As best as you can, you can try to stop that. That's across all of security, not just human risk. So then, in human risk, it would be surfacing that to the CISO.
Beau Friedlander
Tell me about these dashboards. Typically, what do they look like? For people who don't know and they're listening and they want to know why they're listening: what are they looking at? Is it like, you know, the deck of the Star Trek ship, where you see all these lights? Or is it simpler than that?
Charlotte Jupp
In some cases it could be like that, but in a lot of cases it can be simpler than that. The whole point is to be able to communicate a clear and concise picture and help give advice as to, well, what are the remediation steps? Show the risk and say, well, so what do we do? What comes next? That could be thinking about human risk, as I mentioned. It's combining data that you're getting back from your people about their knowledge, their completion, their training, their phishing stats. And you could have individual dashboards for every campaign that you run, which will give you just siloed data that talks about what you've learned from that particular phishing campaign or training campaign. But then the real power is when you start to bring that data together. Consolidation allows you to identify pockets of higher risk. So bringing that data together, perhaps with information about the person behind it. Do they have access to sensitive data? Are they an IT admin with privileged access? Perhaps they work on a device that isn't up to the latest patching standard, or isn't up to the latest configurations. And so it's helping you surface data that relates to an end user, but also applies to wider security decisions that you might need to make. So would you prioritize certain devices for patching as a result of, you know, these people being high risk, and therefore put them in a top tier? Could it be, like the example I gave earlier, that you identify where an additional security tool might be a good way to minimize the risk, like something like a password manager?
Beau Friedlander
But passwords aren't the only risk. So here's the uncomfortable question. If humans are the main way in, if people are the biggest risk, doesn't that mean the obvious solution is to monitor them more closely, Big Brother style? How do you measure human cyber risk without engaging in surveillance and crossing that line?
Charlotte Jupp
Yeah, that's another great question. So I think a lot of it shouldn't be seen as surveillance, necessarily. One part is education. It's actually having that two way conversation, rather than pushing policy down the chain onto employees. It's listening, saying, well, if you aren't going to follow the policy, tell us why and help us understand why, and we can think about whether we can change the policy if it's super restrictive. Now, you can't always do that. Obviously policies are there to protect organizations. But equally, understand: can we do the policy in a better way? As an example, working with customers, we found out one customer was requiring that all of the data that left their organization, going to customers, had to be encrypted by an encryption tool that they used. However, through running training, and allowing people as part of that training process to give their thoughts, they found a vast number of their employees telling them, well, actually, we don't encrypt. They were putting their hand up and saying, we don't use the tool that you're telling us to use to send data to customers, because the customer can't decrypt it. So obviously it's worthless to us, and we need to send the data to the customers. This is our job. So we just don't encrypt it like you tell us to. And by doing that, that exposed something they'd never really realized before: a lot of the business just wasn't using this tool. And the reason they weren't using it is it wasn't fit for purpose. And so then they could change their approach to how they were sharing that confidential information. They were still making sure they had an encryption policy, but no longer a tool that was worthless to the employees, because ultimately they still had to do their job. They're not doing this maliciously. They have to share this data with customers somehow: give us the tools to do that.
And so it's exposing, I think, things like that. Making it more into a conversation, and not something where we're here to watch you and push policy on you that you can't follow. If you give people the opportunity to feed into that policy, I think it builds a better culture overall, where you're all in it together and you're all working towards that same goal.
Beau Friedlander
Now, the other thing that's true is that protecting people from outside threats is important. And the best example that I can think of at the moment, and there are a million things that could go wrong, is an employee is working on something. They're using intellectual property, or they're using something that the company does not want the world to see. Maybe the company has, you know, gotten everyone a seat on an AI LLM, and they're using that LLM and they don't like the answer. They cut and paste the prompt and text it to themselves, and try it on a different LLM that they use on their own. Now, that different LLM that they're using on their own: is it in incognito mode? Is it allowing the AI to train on the data? Is your phone safe? Does a hacker have access to your phone number because it's on a people search site? How do you start to solve for the idiosyncrasies of human involvement when it comes to cyber risk?
Charlotte Jupp
That's a great question. And so it's challenging. And exactly as you say, people aren't necessarily doing something malicious there. They're just thinking, oh, I usually use ChatGPT. Work's making me use Copilot, perhaps. Let me stick the same question into my ChatGPT, because I know I'll get a better output. And that's something we need to constantly make users aware of. You help educate them to understand, first, that the tools that you're using within your workspace are perhaps locked down. Your company is paying for licenses. It could be that you pay for a license at home for certain tools, but your company will be paying for a license, which means that your data is more secure if you're putting it into that. It's not going to be shared, and it's not going to be used for model training in the same way. One way that you can look at educating people is through adaptive security: monitoring whether people are using tools outside of those which are approved by the business. So, like the example I gave, perhaps the business has a Copilot license, but you're actually using ChatGPT. There are tools within the organization that will allow the company to see that you've potentially accessed ChatGPT, that you're putting data through and running queries through that product, and then to nudge people in that instance, maybe through a little notification that pops up. That could be through Teams, it could be through Slack, saying, oh, do you know you're using this? Is this the behavior? And reminding people that perhaps that's not secure. So trying to influence, and kind of jump in at the point that the action is being taken, should have much more of an impact on engaging that person. They're doing something at that point in time. It makes them stop and think about whether they're doing the right thing in that moment.
And if you have that learning in the moment of actually performing the behavior, it usually is more impactful at that point in time.
Beau Friedlander
Well, I mean, it could be like getting your hand slapped as you're reaching into a cookie jar. But so, are you saying, for our listeners out there... I think she just said that CISOs might have visibility into the fact that you did share something that has company information in it with an outside LLM. Talk about that.
Charlotte Jupp
Yeah, and that's interesting, I think, sometimes. So obviously it depends on where you work and on your organization, but there are lots of security tools that sit there to protect you and to protect your company. That could be, obviously, protecting against people getting in who shouldn't be getting in. But it's also making sure data doesn't exit an organization when it shouldn't, whether intentionally or unintentionally. Another example could be you accidentally put two different email addresses, from completely different organizations, in an email that you're writing. And there is technology that can sit there that can warn you after you've pressed send, but before the email actually goes out, to say, hang on a minute, did you actually mean to mix up these domain names in the email that you're sending? And similarly, there are web proxy tools that will be monitoring what web traffic is going on, what you are browsing on the Internet. And alerts can be configured. In certain cases, websites can be blocked. You might work in a company where certain websites are blocked and you can't access them on your work device. There are others that might not be blocked, but where alerts can be triggered to give warnings that certain types of activity are occurring.
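The misdirected-email warning Charlotte describes can be sketched in a few lines. This is purely an editor's illustration, not any vendor's actual tooling: the function name, threshold, and example domains are all made up for the sketch.

```python
def external_domains(recipients, internal_domain="example.com"):
    """Collect the external domains addressed in an outgoing email.

    A message that addresses more than one external domain is a common
    warning signal that two different organizations' addresses were
    mixed up in the same email.
    """
    domains = {addr.split("@")[-1].lower() for addr in recipients}
    domains.discard(internal_domain)  # ignore our own domain
    return domains


# Example: a draft that accidentally mixes two different customers.
recipients = ["alice@acme-corp.com", "bob@globex.com", "me@example.com"]
mixed = external_domains(recipients)
if len(mixed) > 1:
    print(f"Warning: this email addresses {len(mixed)} different external domains: {sorted(mixed)}")
```

Real email security gateways do far more (display-name checks, historical recipient patterns, attachment classification), but the core "hang on a minute" check is this simple.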
Beau Friedlander
Well, so, you know, short of being omniscient and watching every single thing that every single person does, how do you distinguish between risky behavior and normal human behavior? I mean, how do we start to separate the not great from the fine?
Charlotte Jupp
And I think it would be combinations. So you're not necessarily looking at one action in a silo. You could, but not normally. It'd be looking for patterns, and those could come from many different types of behaviors. Let me take a selection. You could be someone who is clicking a lot on phishing simulation links. Phishing simulations are sent out by organizations to try and train you, and help educate you in the techniques threat actors are using to get you to click. It could be you are someone who clicks a lot. You could also see, from the security awareness training that you're offered, that perhaps you don't complete it, or you do complete it, but you click through to get to the end as quickly as possible. So you're someone whose engagement was low, so you haven't necessarily taken the time to educate yourself when offered, and then you make the mistake of clicking in phishing simulations. Additionally, there are behaviors such as using LLMs which aren't authorized. Or maybe you are offered a password manager, but you choose not to use it, so you could see that that usage isn't happening. You're someone who, perhaps, is promoting yourself online. And maybe, worst case, you are someone with privileged credentials to a really business critical system or application, or maybe you are in the finance team and have access to lots of sensitive data.
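One way to picture how the signals Charlotte lists might combine is a simple weighted score. To be clear, the weights and signal names below are invented for illustration; real human-risk platforms derive their models from data, and this is not OutThink's actual scoring.

```python
# Hypothetical weights, chosen for illustration only.
SIGNAL_WEIGHTS = {
    "phish_click_rate": 40,     # fraction of simulation links clicked (0.0 to 1.0)
    "training_incomplete": 20,  # skipped or rushed security awareness training
    "tool_nonadoption": 15,     # offered a password manager, doesn't use it
    "privileged_access": 25,    # admin rights or access to sensitive data
}


def human_risk_score(user):
    """Combine per-user behavioral signals into a 0-100 score."""
    score = 0.0
    # Continuous signal: scaled by the click rate itself.
    score += SIGNAL_WEIGHTS["phish_click_rate"] * user.get("phish_click_rate", 0.0)
    # Boolean signals: full weight if present.
    for flag in ("training_incomplete", "tool_nonadoption", "privileged_access"):
        if user.get(flag):
            score += SIGNAL_WEIGHTS[flag]
    return round(score, 1)


# A finance admin who clicks half of the simulated phish and rushed the training.
finance_admin = {"phish_click_rate": 0.5, "training_incomplete": True, "privileged_access": True}
print(human_risk_score(finance_admin))  # 20 + 20 + 25 = 65.0
```

The point of the sketch is the shape of the idea: no single signal condemns anyone, but the combination, weighted by what that person can reach, is what surfaces the "pockets of higher risk" a dashboard shows a CISO.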
Beau Friedlander
The after effects, the shockwaves after an attack: they're not just one and done, right? They keep coming for a while, and they become part of your world. You know, Caesars said that they had taken care of it. Don't worry, we took care of it. But they couldn't guarantee it. They said they couldn't guarantee that all the data that was potentially exposed was completely not exposed. MGM spent days getting back to normal. Days. I mean, remember, it was in the news for quite a while. Both incidents had a consequence, which is that SEC disclosures became mandatory at that point, and that included FBI involvement right away. State regulators started asking way more questions. The FTC wanted information about data security practices from MGM, and probably everyone else. And lawsuits followed, of course. And there was cyber insurance, but there's no way it paid for everything, at least with MGM. It's doubtful. Maybe they insured a lot, I don't know. I guess, you know, casinos know how to gamble. But so do insurance companies, so I'm not sure about that. But after the dust settled, MGM announced they were going to spend tens of millions of dollars on new security investments, changes to access and authentication. I don't know if they did anything about, you know, protecting personal information. I hope they did. And none of that made headlines, because why would it? Also because if you're like, hey, everybody, stop, this is how we're protecting ourselves, that's just not how it works. Now, we're not talking about a one off failure, right? It was a stress test that showed how modern companies behave once identity systems fail. And if you want to look at it through that lens, and I encourage you to, the whole thing, everything about our lives today, is a stress test. A stress test for, like, how we stay safe.
Charlotte Jupp
Humans who don't work in cybersecurity aren't meant to be thinking about this every single day of the week. We as cyber teams do, and we have to help those people, to make sure that they're protected and that they can go about their day to day and function. So how do we make sure that we stop those mistakes from having bigger impacts, if we can?
Beau Friedlander
All right, so if you zoom out from MGM and Caesars for a second, there's a bigger pattern here. Both of these attacks didn't start with some secret internal system. They started with information that was already out in the world. Names, roles at the company, relationships, who they knew, you know, family, a lot of context. Things that weren't classified. Things that were, in fact, online for free. You know, you could Google it. Things that were just there. The same dynamics that made MGM vulnerable? Those do not stop at the edge of a company. You know, they show up in our individual lives, they show up where we work, they show up everywhere. So how easy is it for someone else to build a picture of you: who you are, what you do, who you're connected to, without ever interacting with you directly? If you go and look for me online, and this is an open challenge to anyone listening to the show, you're not going to find out very much. Not even where I lived in 1999. I'm not there. And if you really want to figure it out, you're going to have to do some digging on those genealogical sites, but you're going to have a hard time, because it's all been wiped out. And it's part of my own cybersecurity protocol to keep myself safe, so that I'm harder to target. We've been doing shows recently on the scam compounds in Southeast Asia. Now, those are the ones you see as a text in your phone saying, are we playing tennis today? Or, hey, I wondered if you were around. Or, are we having dinner? And do you answer? I don't answer now. And I'm not mean to them either, because those are people who don't have a choice. They're human trafficked, and they're in scam compounds. And so I just don't answer. Do you? That's what I do.
Charlotte Jupp
It starts with thinking about your privacy and what data is available about you online. And it could start even with thinking about maybe the types of social media where you might overshare: what you, you know, have for breakfast, what you have for dinner. Simple things like LinkedIn. Most people will have their professional profiles on LinkedIn: their last however many jobs, what their role encompassed, and maybe even just one sentence about what they did in that role. So you offer up that information, and a threat actor can start to see, well, what type of data you might have access to, what type of tools you might have access to. If they could see you're in finance, you might have logins to the payment systems; they would expect you to have certain access. And they can also learn information about the company that you work for. Your company will probably have a LinkedIn page which will promote things to make your company interesting, attract people to work at the company, and obviously talk about the initiatives that you're running. Pulling together all of this data, which, with a hacker having AI, is now really easy for them to do. It's not like they're manually sitting there looking at every person. They're pulling all of this together, and that means they can make a very targeted approach, which in most cases in business would be through your email. But like you said, if they know that you're on a help desk... we're seeing a huge proliferation now of attacks where people will phone the help desk and try to get passwords reset, perhaps for someone really senior. You know, think about the VIP in the business. You phone up pretending to be the VIP, get the password reset by the help desk, because you're deepfaking that help desk, and then you've got the keys to the kingdom.
Beau Friedlander
Perhaps AI does facilitate creating the story that gets someone in the door now. And AI pretexting, which is social engineering pretexting, is just doing your research, taking all of that research, putting it into an LLM, and, I'm not giving instructions here, don't do this, asking the LLM, what's the best way to get so-and-so at such-and-such a company to tell me this? So what's the real risk we should be worrying about? Not that hackers are getting smarter. They aren't. But AI is making it seem like that. The risk is that we've built systems where ordinary human moments, an off-guard moment working a help desk, a password reset, a public profile that has a little too much information, can cascade into massive problems. It's not Ocean's Eleven. I've said that. It's not an attack on Fort Knox. It's actually probably harder to hit than Fort Knox. Maybe not. There's more artillery at Fort Knox. It's a phone call away, though. And frankly, a phone call made possible by information we freely share online with companies that we do business with and more. If our security systems only work when we all behave perfectly, is that really security? Charlotte Jupp, thank you so much for joining us. Charlotte Jupp is the VP of Customer Success at OutThink, and we're really grateful you could join us this week.
Charlotte Jupp
Thank you very much. Really enjoyed speaking with you.
Beau Friedlander
Okay, it's time for the Tinfoil Swan, our paranoid takeaway to keep you safe on and offline. Now, if you're listening to this and you're like, oh, there goes Beau again trying to sell me DeleteMe, think again, because I'm not. Everything that you need to do, you can do yourself. And if you don't believe me, go on Reddit and look at some of the comments that people have about personal information removal companies. Now, it's true. That said, I don't care how you do it, but you need to do it. If your information is out there online and you are concerned about cybersecurity, you're sitting there every day with a layer of stress you don't need. And it's one that you can easily deal with yourself. Well, not easily, but you can actually go on the DeleteMe site and use the DIY pages to figure out how to remove yourself from all of the different brokers. Or, you know, you can use a personal information removal service that hits 750 places and does custom removals and everything else. And I know this is starting to sound like an ad, so I am going to shut up. But do it, because it is a layer of the cybersecurity puzzle that is very easy to solve and very important. And that's it. That's your Tinfoil Swan. I hope you're not mad at me for talking so much about our core competency, but there you go. That's it. Thanks for listening. See you next week. What the Hack is produced by Beau Friedlander, that's me, and Andrew Stephen, who also edits the show. What the Hack is brought to you by DeleteMe. DeleteMe makes it quick, easy, and safe to remove your personal data online and was recently named the number one pick by The New York Times' Wirecutter for personal information removal. You can learn more about DeleteMe if you go to joindeleteme.com/WTH. That's joindeleteme.com/WTH. And if you sign up there on that landing page, you will get a 20% discount. I kid you not, a 20% discount. So yes, color me fishing, but it's worth it.
Podcast Announcer
What is healthy spirituality and how does it help us thrive? We explore these questions on the new season of With & For, hosted by me, Dr. Pam King. With & For bridges psychology and spiritual wisdom to help you thrive, featuring conversations with experts like self-compassion pioneer Kristin Neff and author-activist Parker Palmer. So go ahead, follow With & For, hosted by Dr. Pam King, wherever you get your podcasts. Curious about the future of healthcare? Tomorrow's Cure, the chart-topping and Ambie Award finalist podcast from Mayo Clinic, brings it to you today. I'm Cathy Wurzer, and in this new season, I sit down with researchers, doctors, and industry experts who are leading the way in medical innovation. From cutting-edge technology to breakthrough treatments, we'll explore how new solutions are improving and even saving lives. Follow Tomorrow's Cure wherever you listen to podcasts.
Date: February 10, 2026
Host: Beau Friedlander
Guest: Charlotte Jupp, VP of Customer Success at OutThink
Featured Contributor: Chris Tarbell, Former FBI Agent
This episode takes listeners inside one of the most high-profile cyberattacks of recent years: the MGM Resorts breach in Las Vegas. By dissecting how a single phone call led to $100 million in damages, the hosts and expert guests spotlight the persistent risk humans pose to even the most tightly controlled environments. The episode breaks down how threat actors leverage social engineering, the evolution of human cyber risk, and what organizations (and individuals) can do to safeguard against the weakest link: people.
Background & Impact
Not Just MGM
The Human Factor
Swiss Cheese Model of Defense
Attack Vector: Social Engineering
SIM Swapping & Phishing
Evolving Threats
Workplace Pressures
Lateral Movement
The “Help Desk” Weak Point
Contrasting Responses
No Easy Answer
Chief Information Security Officers (CISOs)
Dashboards & Risk Identification
Monitoring and Privacy
Adaptable and Responsive Security Measures
Pattern Recognition
Minimizing Your Public Data
Be Wary of Oversharing
AI Enables More Sophisticated Exploitation
The Real Risk
"Between 60% and 90% of breaches start through the human element."
– Charlotte Jupp (02:46)
"If this exploit worked at MGM, it works anywhere."
– Beau Friedlander (10:32)
"Cyber risk is now an inescapable, persistent condition."
– Paraphrase of Beau Friedlander’s analysis (13:41)
"If you give people the opportunity to feed into that policy, I think it builds better culture overall, where you're all in it together..."
– Charlotte Jupp (19:58)
“If our security systems only work when we all behave perfectly, is that really security?”
– Beau Friedlander (34:56)
The episode urges both organizations and individuals to rethink how they manage—and minimize—their digital footprints, emphasizing education, proactive policies, and constant vigilance in an ever-evolving landscape of threats.