
A
Hi, it's Jim. We're taking some much-needed R and R, but I'm posting some of my favorites from this year, shared between both of our podcasts, Trending and Cybersecurity Today. We'd like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless, and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com CST.
B
This is how we're wired. We cannot be continually at a state of heightened vigilance about a particular thing. This means that this kind of awareness goes away. We expect it to go away. It is not stateful, it disappears. This is to say that if you train somebody on something once and you provide them awareness, it is not a software patch. You cannot expect them to maintain that state of awareness. You should be looking at what happens to that awareness over time without reintroduction of new stimuli.
A
Welcome to Cybersecurity Today. This is what I call the research show. About once a month we try to do a show that's basically themed on research, and I've been really fortunate: I have a couple of great people for the show today. Michael Joyce is the CEO of the Human-Centric Cybersecurity Partnership at the University of Montreal. He's also a PhD candidate there. Welcome, Michael.
B
It's great to be here.
A
And my friend and colleague David Shipley, who you know as one of the hosts of this show, and maybe even as the CEO of Beauceron Security. David's been doing a lot of research into cybersecurity and culture, so David's back on the show with a new report that Beauceron has done and, I know you're going to be shocked, some opinions perhaps.
C
Thank you, Jim.
A
Yeah, Michael, just can you tell us a little bit about the work at the University of Montreal? I don't think a lot of people know about it. If you can just give us a little bit of background on your partnership there and some of the things that are going on at the university.
B
So the Human-Centric Cybersecurity Partnership is hosted by the University of Montreal. We're within the International Centre for Comparative Criminology, which is a world-renowned centre for criminology. The University of Montreal actually has one of the largest criminology research groups anywhere, as far as I'm aware, and they have a big contingent of cybercrime and cybersecurity researchers. Our scientific director from there is Dr. Benoit Dupont, who's very well respected when it comes to cybersecurity and cybercrime. This is a group funded under the Social Sciences and Humanities Research Council, and we have researchers from across Canada and across the world who focus on cybersecurity from a humanities and social sciences perspective. That's not to say there aren't any technical aspects, because it's cyber; there must be. But what they're doing is going humanities, psychology, behavioral sciences, political science first as an approach to these problems, to try and find some more lasting solutions. So we have research projects on everything from how Canada better defends democracy and Canadian institutions, so that they remain very much Canadian in the way they run within the cybersecurity environment we have, all the way down to how we better design interfaces for security: how do we put security in the hands of people in a way that they can actually make use of it, in ways that are usable and actually helpful in their daily lives, so that they don't just try and find a way around it?
A
I'm glad you've housed it in the criminology division, because it's criminal how little people know about research at Canadian universities and where we stand in terms of our stature in the world. We have so many experts contributing to this field. There's not a lot of profile for that, but people could be quite proud of what's happening in Canadian universities, and we don't get enough time to give them a shout-out.
B
Absolutely.
C
And my background is that I'm a liberal arts guy. My undergraduate degree is in information and communication studies. My master's degree is in business, which I know is a shock to all the business folks out there: it's also a liberal arts degree. I think we bring a lot to the table to complement the STEM science focus in this field, which has been dominated by computer science, electrical and computer engineering, and more. But the reality of this messy problem of cybersecurity: it ain't just the computer, my dudes, it's the people and the computer together.
A
But I think everybody knows that, David, I really do. I think people in the industry know it; we just don't recognize it, that's all. Every time you get people in cybersecurity together, they will tell you: the technology is not the problem, buddy. It's culture, it's people, it's behavior; those are my problems. And then we'll immediately start talking about the tech again, because it's interesting. I think the problem is that we admire the problem of culture, but not enough people take a hard scientific approach to it, to say: how can I learn something, and we've talked about this, that generates an outcome? In fairness, and I come out of academia as well, I'm not dismissing it, but we don't focus enough on the outcomes we're trying to bring about. I think that's why people sometimes feel our research isn't as relevant as it might be. We can also talk later about general tech research, but that's another story.
C
Well, I think there's a comfort in trying to keep this conversation about speeds and feeds, about antivirus, about email filters, about technological controls, because those are perceived to be mathematical, understandable, quantifiable, fixable. And then there's this messy human thing. There are very few people I know who started hardcore in IT who are people people, right? They love technology, or they're results people.
A
There's a famous old joke, I hate it, I've told this joke so many times. A guy sees a drunk out in the street, crawling around, and he says, what are you doing, mate? And he says, I lost my car keys. He says, where'd you lose them? He says, over there, across the street. So why are you looking here? And the guy looks at him and says, because the light's better. And I think that's something like what we do. We pursue the things we know. Again, I think it's these hard outcomes we're looking for, but we pursue the things that we know: I do this, I get a result, and that's what fascinates us. Plus, a lot of us got into technology not because we were interested in psychology; we were interested in technology. And those two things combine. I think your partnership with Michael is one of those things. Now, what I want to explain to the audience is that David's firm has done a lot of research, and you've turned that over to Michael, who's an independent researcher. Why'd you do that?
C
I think, number one, we don't have all the skills or the background to understand all the things. And that takes a moment of being self-aware and humble to say, okay, we need to work with the smartest people we can find. When I first met Michael, it was at BSides Montreal, and he took me for a coffee, and in a couple of minutes I realized there were whole angles on the human side of cyber I wasn't even thinking about. It was Michael who introduced me to the concept of, hey, your brain needs energy, it needs calories to compute properly; if it doesn't have enough calories, it's going to do interesting things. So this isn't just about David educating people and saying, now you're smart. From there it opened up a lot of my thinking around psychology, and I realized this is a really smart guy I can learn from. The second part, in addition to having skills and perspectives that we don't have, and we benefited so much from this, and this is the one thing I want to say to you folks in our space: go find researchers who know things you don't. The other part is that it gives greater validity to some of the findings. The reality is, I am a company. I have a profit motive. I have a desire to grow and thrive as a business, and that can be seen as tainting my perspective on the research. I even have to be self-aware of that along the way. By building trust and handing off to independent researchers, not only do I get new knowledge, I get new validity to the knowledge that's gained.
A
Yeah, and I think that's fair. One of the critiques I have of research in technology, or what passes for research in technology, is that it suffers from bias, because people are in business. Many times we go looking for the answers we want, and it's a constant fight; it doesn't matter what field you're in or what drives you. It's a natural human thing to have a hypothesis and go looking for the proof of it. It runs through all of the sciences. Michael, I want to bring you into this conversation at this point. So David's turned some of this information over to you. What interested you about this project? And maybe we can also talk about what the data is and where it takes us.
B
Sure. It's interesting to hear your perspectives on the objectives of solving things. I often like to think that the difference between doing research in a company and doing research as an academic revolves around the interpretation of time. I'm not out to solve a particular problem, and particular solutions have a timeline to them. If you don't make the company profitable before the company goes out of business, there's no point trying to make it profitable anymore. If you have security issue X and it's not solved before all of the data is lost, then the parameters you're solving for are different. As an academic, the objective is to create an output that stands the test of time by itself. You have to try and create a piece of work that answers all of the questions anybody is ever going to ask of it, without you there to rephrase, to refine, to interpret for them. So you really need what you're doing to be very robust. You need to make sure you've dotted every i and crossed every t. That's what the peer review process is for: to make sure you haven't missed any questions that might come up later on. You're not so much trying to solve a particular problem right now; you are trying to make the world a better place by creating knowledge that other people can use as a foundation to then solve problems. So you absolutely have to make sure it's right, and because of that, it changes your perspective. So it was great to work with David, because we managed to work out an arrangement where he said, here are all of the things, here is the data that we have, in a format that we're able to ethically share with you, that doesn't put anybody at risk, which was very important to both of us.
C
Have at it.
B
Basically, he didn't point out a particular solution that he needed. He didn't point out a particular problem he was working on, or a business case. Just have at it. That was great, because I have ideas and concerns and theories that I wanted to test, and this is a large data set. If we look at just the Canadian portion of the data set, let's say for 2024: 700 companies, a quarter of a million individuals. There's not a data set like this being looked at by scientists that I know of. A lot of them have maybe thousands of participants, but generally within one company, and each company has its own culture, so the data is not as generalizable. So this data is rich and very helpful. And it's based on what people are actually doing: not just survey results, but the actual interactions people have with their systems in the real world. So it enables us to do things and answer questions that haven't really been approached before.
A
And I don't know how many people actually read the fine print on research reports, some of them good, some of them what I will call so-called research. But in North America, if you get a sample size of 1,500, you are singing Happy Days Are Here Again, and there are things you can do statistically within a sample of that size. I'm not going to trivialize that. But the holy grail of research is a bigger sample, and you seem to have gotten that. What made you interested in this?
B
In terms of cybersecurity at all, or this particular data set?
A
Well, this particular data set, but cybersecurity in general, it's one of the places you can apply your research talents. But this data set in particular.
B
So I've been doing knowledge mobilization around cybersecurity for more than a decade. Knowledge mobilization is where you try to take the things that we have learned, primarily as academic researchers, but also best practices from practitioners, and get that to the people who absolutely need it: the people putting posters on walls in organizations, high schools, things like that. So I've been doing that work for a long time. And one of the big pushes in cybersecurity awareness, and with this knowledge mobilization, is Cybersecurity Awareness Month. It happens every October. It's been going on for a long time in Canada; Canada was quite early getting into it, and it originated from a nonprofit in the United States. It's a big push where companies invest in trying to raise awareness around cybersecurity issues for the month of October. Having been involved in this and seeing all of the work that's being done on it, one can't help but wonder: does it do anything? Is there a point to putting all of this effort into awareness for the month of October? Does it change anything? So I think that was the place to start, because finally we have a data set that speaks to Canada, across Canada, and is large enough that we should be able to see the effect of something even as far away as a government program to raise awareness.
A
What did you find?
B
So, what we found. I expected this to fail. I was hell-bent on starting very negatively on this, because with what's happened with Twitter and Facebook, I had the feeling that maybe all of this stuff on social media is just in a bubble somewhere and doesn't actually make any real difference. But that's not what I found, to my happy surprise. If we just look at the years 2023 and 2024: for those playing the robustness game with us, in 2023 we had 580 organizations and 146,000 individuals in the data set; in 2024, 700 organizations and 227,000 individuals. Those are the base numbers we're talking about here. In 2023, during the month of October, there was an increase of 13% in the number of phishing email simulations sent, and in 2024, an increase of 23%. What that says is that organizations are taking part; they are doing something different as a result of Cybersecurity Awareness Month. This is a change that happens in October that's ostensibly driven by this program. So, number one, we've learned that companies are doing something different as a result of this program. That's a good thing; that's a change in behavior from organizations. Secondly, the rate of phishing clicks, that is, clicks on phishing simulation emails: in 2023 there's a drop of 12% during October, and in 2024 a drop of 11%. So we can say around an 11% decrease in the click rate for those two years. That's something as well: during the month of October, while all this awareness is going on, there is a decrease in clicks on phishing simulation emails of above 10%. That's positive too, right? That's a good thing. Then, when it comes to actual cases of phishing: these are not the simulations that are sent.
These are the real-world phishes that have managed to jump past all of the different technical controls organizations have and make it into someone's inbox, and that person was aware enough to report that real phishing incident. There was a change there as well: in 2023 there was an increase of 16% in those reports, and in 2024 an increase of 6%. The thing to keep in mind is that we don't have an absolute number. We don't know how many phishing emails actually made it through all of those technical controls, because by nature they are undetected, so we can't give you a rate for that; it's just relative to the other months. So those are the good, positive things that come out of awareness month, and they tell us that awareness does do something. The interesting thing, though, is in phishing simulation reporting: fewer of those are reported. In 2023, a drop of 5%; in 2024, a drop of 7%. This is not the behavior that we would want, but it might speak to security fatigue. There could be an upper limit to the benefits we receive from doing awareness the way we do it in the month of October. That would suggest it's not necessarily the case that, gee, October Cybersecurity Awareness Month works, so maybe we should just do that all year round, because there could be a negative detraction from it. People could get annoyed and no longer want to contribute to company cybersecurity by reporting those phishing simulation emails. Phishing simulation reporting, where someone has identified a phishing email, drops during Cybersecurity Awareness Month.
We don't know exactly why, but a theory that appears to fit the observations we have is that there is so much awareness that people's feeling of benevolence, or their sense that they need to contribute, actually decreases: they see so much cybersecurity activity going on that they think the security staff have got this. So we're not entirely sure why, but it could point to a negative outcome from overdoing cybersecurity awareness. It could just be past the point of saturation, and people are no longer as willing to contribute back to the company by reporting the email. They're not clicking on them, but they're not reporting them either.
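Michael's October findings are easier to keep straight side by side. A minimal recap of the percentage changes as quoted in the conversation; this is illustrative only, not the study's dataset, and the "fatigue hint" flag is my own framing:

```python
# October (Cybersecurity Awareness Month) effects, as percentage changes
# relative to other months, per the figures quoted in the discussion.
october_deltas = {
    2023: {"sims_sent": +13, "sim_clicks": -12, "real_reports": +16, "sim_reports": -5},
    2024: {"sims_sent": +23, "sim_clicks": -11, "real_reports": +6, "sim_reports": -7},
}

for year, d in sorted(october_deltas.items()):
    # Clicks down and real-phish reports up are the desired directions;
    # simulation reporting down is the surprise that hints at fatigue.
    fatigue_hint = d["sim_reports"] < 0
    print(year, d, "fatigue hint:", fatigue_hint)
```

Both years show the same mixed signal: the desirable metrics move the right way while simulation reporting slips.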
C
And here's where this is interesting. So last year, Jim, around the fall, I talked about some of the findings we had about the differences in phish frequency and the outcomes we observe. We bracketed into three groups: what happens if you phish more than monthly, so daily, weekly, bi-weekly, that was one group; what happens if you phish monthly; and what happens if you phish less than monthly, so every two months, every quarter, once a year. Those were the three groupings for your phishing simulations. The folks that phished more than monthly: the click rate average across that cohort was 3.5%, and the report rate average was 20%. And again, these are averages. When we looked at less than monthly, the click rate average was far higher, at 5%, and the report rate was lower, at 15%. Then we went to monthly, which turns out to be Goldilocks in our story: it has an average click rate of about 3.05% across all organizations, and a report rate of 25%. So this idea of security fatigue is seen in another context outside of Cybersecurity Awareness Month, because this is continuous data: on a rolling 12-month basis, what does this actually look like, based on frequency? What was really interesting about that is it raises the question of what the right frequency is at which to do this. Because in our work, and we continue to plumb the depths of this, we're not even done, one of the questions we have is this: in economics there's a concept of diminishing returns, and we want to be efficient with people's time. There is a value to raising awareness and sustaining it, but can you overdo it? Yes. There's a story circulating on LinkedIn that everyone's getting a good chuckle about, but it also speaks to the worst instincts of this industry.
It's a story about a woman who apparently fell victim to a phish lure based on Taco Bell, who was actually hungry when she fell for it, and arrived at work expecting Taco Bell and got a three-hour training session instead. No, guys, this is not the way to do this. There's a way to do phishing simulations well; there are a lot of ways to do them horribly.
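The cohort numbers David cites are easier to compare in one place. A minimal sketch, using the averages as quoted; the report-to-click ratio is my own illustrative score, not a Beauceron metric:

```python
# Three phishing-simulation frequency cohorts, with average click and report
# rates in percent, per the figures quoted in the conversation.
cohorts = {
    "more than monthly": {"click": 3.5, "report": 20.0},
    "monthly": {"click": 3.05, "report": 25.0},
    "less than monthly": {"click": 5.0, "report": 15.0},
}

def reports_per_click(c):
    # Higher means more healthy reporting per unwanted click.
    return c["report"] / c["click"]

best = max(cohorts, key=lambda name: reports_per_click(cohorts[name]))
print(best)
```

On this simple ratio, the monthly "Goldilocks" cohort comes out on top, consistent with the point about both under- and over-phishing.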
A
This was a phishing simulation based on Taco Bell, and when they brought her in, they threw her into a class which they called Taco Bell Alcatraz.
C
And I don't know what it's called, but according to what I've witnessed on LinkedIn, you've got one group of people going, ha ha, serves you right, here's your three hours of training, and other people who are absolutely horrified: look what you're doing, this is an abusive process, anybody can click. True. And is this the best use of her time, and/or company time and resources, at three hours? My preliminary analysis: let's say three hours is way overkill. If she took it jovially, great, but a lot of people won't, and they can actually become a security detractor. So again, my point wrapping all this back is that we have so much work to do in this space to say what the right way to do this is, at the right frequency, within the right boundaries, and other things. And it's not that I'm coming at this space and saying any kind of phishing simulation, done any way, with any type of lure and any type of training outcome, is good. No, that is not what I'm saying. But done well, ethically, for a purpose, efficiently, it can positively impact and reduce risk. One of the major criticisms I often hear from the technology side of the house is: well, people still click even after you train them, therefore it's useless. My dudes, people still get into car accidents after driver training. A, training is perishable, and B, vigilance is what we're trying to build. Humans are humans; there's lots of things that can happen. But there's benefit to cutting down on traffic accidents, and there are benefits to cutting down on human behaviors that put pressure on security controls.
A
But something keeps jumping out at me when we talk about the research that you did last year and some of the stuff that's going on now: this idea of timing and optimizing is important from two aspects. One is results; the other is the investment of time. If you've got an employee you can put in a class for three hours without a substantial loss from that, you've got a problem. And I think you've got a little bit of irresponsibility there too, in terms of the punishment aspect, but we can take that up another day. Michael, what were your findings on this?
B
We'll get to talking about awareness decay in a second. David mentioned a word there, which was vigilance. And I think that perhaps at the core of a lot of this misunderstanding, let's say, about cybersecurity are differing interpretations of what the word awareness means. Aware is a very old word; it predates, in some form, most European languages. If you want to get an idea of the depth with which I've been researching these kinds of things, or how...
A
Long you've been around. Jeez.
B
The word was often used when people were riding on horseback, to point out a hole or a snake or something like that. But the word can be taken to mean to become aware of something, which is how we quite often use it: you were not aware of something, and now you are. That's a state change. This seems to be how it's generally used. If we look at the NIST documentation, they describe cybersecurity education as a continuum from awareness (you weren't aware about cybersecurity, now you are) all the way through knowledge to expertise: at some stage, you're a cybersecurity expert. But cybersecurity awareness, under that idea of what awareness means, is a state change, and it's not reversible: once you become aware, you are aware. Now, there's a different interpretation of awareness that we might understand as situational awareness, heightened awareness, vigilance. These uses of the word mean being able to direct your attention to something that might be a danger to you, such as a hole in front of your horse. This is heightened awareness; this is vigilance. This is something that cannot be maintained, because otherwise it would just be your current state of being. So if I direct your attention towards the snake, for a while you will be looking at the snake. Then you lose interest in the snake, and you start looking at other things that might be of danger or interest to you, like the sandwich. This is how we're wired. We cannot be continually in a state of heightened vigilance about a particular thing. This means that this kind of awareness goes away; we expect it to go away. It is not stateful; it disappears. This is to say that if you train somebody on something once and you provide them awareness, it is not a software patch. You cannot expect them to maintain that state of awareness. You should be looking at what happens to that awareness over time without the reintroduction of new stimuli. And that's what awareness decay is. So we started to look at it.
Okay, so for the people who have received training on phishing, at what rate do their skills to effectively identify phishing decay? If you effectively identify a phishing email, you will do one of the clever things. Unfortunately for David, the smartest thing to do with phishing is nothing, because it requires the least effort. You don't even have to make a final decision on it; you just ignore it and let it disappear down your inbox. That's not great if you go back into your inbox later and click on it, but it's the most efficient thing to do. Next on the list is probably delete, depending on where the keys are, which one is closer to you. After that, it's report. So for the people who did something clever, or, in the case of clicking on the actual phishing email, didn't do something clever, we measured the change in their ability to do the clever or not-clever thing. The cleverest thing to do, from David's perspective, would be to report. So we measured how long after receiving training people would report, and we also measured how long after receiving training people would click. And these are only the people who clicked or reported, so we're not looking at everybody; this is a percentage of a percentage. Nearly 70% of people do the calorifically smart thing and do nothing, so we have no data for them, really, because they didn't do anything. In the sample, 28% reported and 3.6% clicked, so we're talking about a small fraction of the total phishing emails sent. For the ones who reported, the ones who did what David wanted them to do, we notice that there's a decline. Immediately after receiving training, there's a 98%, or 98.4%, probability that they will report the phishing email they receive. After a month, that drops to 97%. After two months, 94%. After three months, 90%. This is a curve.
It then drops off: after 120 days, 84%; after 180 days, half a year, 60%; and after a year, 4%.
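The decay figures Michael walks through can be tabulated to show how quickly the curve steepens. A minimal sketch, using the probabilities as quoted; the per-30-day normalization is my own framing, not the study's method:

```python
# Probability (%) of reporting a phishing email, by days since training,
# per the figures quoted in the conversation.
decay = {0: 98.4, 30: 97.0, 60: 94.0, 90: 90.0, 120: 84.0, 180: 60.0, 365: 4.0}

days = sorted(decay)
drops = []
for a, b in zip(days, days[1:]):
    # Percentage points lost over the interval, normalized to a 30-day month.
    rate = (decay[a] - decay[b]) / ((b - a) / 30)
    drops.append(round(rate, 1))
    print(f"{a}-{b} days: {rate:.1f} points lost per 30 days")
```

The normalized loss accelerates month over month, which is the shape Michael describes: the curve does not flatten out, it falls away, hence the case for periodic reintroduction of stimuli.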
A
So this is the proportion that will report?
B
This is the probability that someone who has received training will report, and the timeframe in which those probabilities change. So if someone...
A
Those numbers, at least in the early months, are pretty good.
B
They're pretty good. The training's doing something, it's doing something. There is a change over time, though.
A
And then it just falls off a cliff. But aren't we dealing with two issues here? You've got one, awareness or education: I know that this is probably phishing, so I'm going to ignore it or I'm going to do something. And then a motivational challenge, which is: I'm going to take an action. Aren't those two separate things?
B
Potentially, but let's just assume that people are generally motivated to avoid harm, which a phishing email should be recognizable as. Let's just assume they have a general willingness not to click on the bad thing, whatever their instincts are. Given the size of our sample, if there are some bad actors in there who are deliberately clicking on the phishing links, and there have been cases of people who were just damn sure that something was phishing, we'll assume there's a very small percentage of those people in there. We have a big enough sample that they should even out towards the edges. There might also be people who report...
A
Those are people avoiding harm. I agree, most people don't want harm, or don't want to harm their organization. But taking an action, that is, not clicking on it, is one thing. Taking the action to say, I'm going to be part of the community, I'm going to be a good person, and I'm going to report this, is a separate issue and a separate motivation.
C
I think that's a third state of being. And this is because Michael introduced me to the concept that people being efficient is not people being bad; this is an evolutionary design. Now I incorporate this in talks I give around the world: hey, man, the human brain evolved System 1 and System 2 for a reason, because we didn't always know where the next meal came from, and brains burn a lot of calories. So part of reporting is benevolence: I want to protect the community, I want to protect the organization. We like to emphasize that deleting absolutely does protect you; people are not wrong from that perspective. But reporting can help protect everybody, because it can raise the alarm. So then you're tapping into a sense of community, benevolence, etc. The other, deeper opportunity is making reporting more beneficial to you than not reporting. Well, what does it mean to me? This is where closing the feedback loop on what the hell this email was in the first place is so important. Because right now, in the vast majority of instances, the report-a-phish button is the cybersecurity version of that banner in the movie Office Space: is it good for the company? Well, yes, it's good for the company, but hilariously, that means it's not necessarily good for you. The what's-in-it-for-me is an important thing. So now we have the opportunity, with the way that we close the feedback loop when people report an email, for them to actually get something back. And more than just simulations, because if you just do this for sims, and we've seen this in our data set, people will stop reporting in general, because they report non-sims and they don't hear anything, and that's inherently demotivational. So when we close the feedback loop for non-simulations: oh, this actually did come from HR. All right, great, this was actually useful to me; I needed to know if this was legitimate, and it was.
Oh, this did come from DocuSign, but here's a caution if you didn't expect it: DocuSign can also be abused, and here's who you can talk to for more help on this problem. That's valuable to you. So we've got to make it worth more if we want to get those report rates above that 28% average. And we see this in organizations that actually close the feedback loop: their report rates are far higher, 50% plus, because it's worth something to do it. The calorie bill was satisfied with a response that was useful.
A
I have to say, one of the things that shocked me most reading through your latest research report was the concept, and this is the second time I've read this in a report, both of them being from your company, that people wouldn't respond to those who had reported. That just blew me away. If you don't even get a thank-you, or a good-citizen pat on the shoulder, or possibly some reinforcement that says, hey, this is good, this really wasn't a phishing email, but whatever, then to just throw this stuff out there and expect people to continue to do it without responding to them just makes zero sense to me.
C
So it's a scaling problem. Let's look at my largest customer, a couple hundred thousand people. I'm going to obscure that a little bit to avoid any identification, but into the hundreds of thousands of people. They get at least 50,000 non-simulation reports a month. Remember, you don't get to control this volume; the attacker controls the volume, and then your email filter efficacy has some downstream effect. To respond to all of that manually, we estimated it would take a team of eight to ten people, 24/7, and it's a multimillion-dollar bill, and I can tell you that nobody values that at that scale. So we developed a solution that would actually do it using AI and other things. Look at me, Jim, like I'm a learning being.
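The staffing claim above can be sanity-checked with back-of-envelope arithmetic. The per-report triage time and hours worked below are illustrative assumptions, not figures from the episode; at roughly two minutes of investigation per reported email, the quoted volume does land in the eight-to-ten-analyst range.

```python
# Back-of-envelope staffing estimate for manually triaging reported emails.
# MINUTES_PER_TRIAGE and the shift model are assumptions for illustration.
REPORTS_PER_MONTH = 50_000        # non-simulation reports cited above
MINUTES_PER_TRIAGE = 2            # assumed time to investigate and answer one report
HOURS_PER_ANALYST_MONTH = 8 * 21  # one analyst: 8-hour shifts, ~21 workdays

triage_hours = REPORTS_PER_MONTH * MINUTES_PER_TRIAGE / 60
analysts_needed = triage_hours / HOURS_PER_ANALYST_MONTH

print(f"{triage_hours:.0f} triage hours/month -> ~{analysts_needed:.0f} analysts")
```

Double the assumed triage time and the bill doubles with it, which is why the attacker-controlled volume matters: the defender doesn't get to pick the denominator.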
A
Using what?
C
AI, Jim. We use AI. Michael, I'm dragging you into our constant back-and-forth, but we use machine learning to actually help, and then we use programmatic logic structures to give customizable feedback. This is what we got our first patent for in the US, and it makes a huge difference. But you talked about why we partnered with Michael. If I hadn't been introduced to the concept of thinking about the calorie cost of doing an action, I wouldn't have appreciated why it's so important to close that feedback loop. Other platforms know this as well: if you don't close feedback loops, people stop doing the thing.
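The "programmatic logic structures to give customizable feedback" David mentions can be pictured as a small rule layer sitting on top of a classifier's verdict. Everything below is a hypothetical toy, not Beauceron's actual system: the verdict labels and messages are invented for illustration.

```python
# Toy sketch of closing the feedback loop on a reported email.
# In a real system the verdict would come from ML plus analyst review;
# here it is just a string parameter.

def feedback_for_report(sender_domain: str, verdict: str) -> str:
    """Return a closing-the-loop message for someone who reported an email."""
    if verdict == "malicious":
        return "Good catch - this was a real phishing attempt. Thanks for reporting."
    if verdict == "simulation":
        return "This was a phishing simulation. Nice work spotting it."
    if verdict == "legitimate":
        # e.g. it really did come from HR or DocuSign
        return (f"This email from {sender_domain} checks out as legitimate, "
                "but stay cautious: trusted services can also be abused.")
    return "We're still investigating - you'll hear back when we know more."

print(feedback_for_report("docusign.com", "legitimate"))
```

The point of the episode's argument is the last branch as much as the first: even "we're still looking" is a response, and any response beats silence for keeping report rates up.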
A
I suppose the simple academic response to that is going to be "duh."
C
And yet. And yet, Jim, in our data set, still 80% of organizations aren't closing the feedback loop on real emails.
A
Michael, I want to bring you back into the conversation. What are the other insights that you have as a researcher into things that we might not be doing, or that we're not discovering from the data we have?
C
Looking at the clicks.
B
I know nothing about arrogantly ignorant technologies, so you'll have to leave me out of the AI discussion. I think you raised an important point when you were talking about reporting. One of the things that we have discovered is that reporting is not the opposite of clicking. It's a different beast; it has different things attached to it. So I think it was a good instinct to point out that there are other things associated with the motivation to report something. It's not just self-preservation; it appears there are some other things attached to that. So full marks for pointing that aspect out on the reporting rates. And we can tell that because it's different from the decline curve for clicking. So if we look at the same data set, the same reduced, smaller group out of the 250,000 people, the ones that had a clicking interaction, so this is a very, very small population of that, the likelihood of clicking on another phishing email immediately after receiving your phishing training was about 3.5%. That is an interesting point, because it means stuff happens. They could have done their training, tripped over a ham sandwich, clicked on the next phishing email that comes in. Life is life; things happen. So the idea of 100% when it comes to anything in the real world is something we should probably give up right now, because this points to there being a 3.5% probability of clicking even when you are in the best state you will ever be in when it comes to awareness. Things happen; we know that life's like that. You were expecting a package from Purolator, an email comes in saying that your package from Purolator is coming, you get caught up in that. This is just the world.
A
But this is also the reason why, hopefully, we do research and look at these things scientifically, because we always do that immediacy thing. I've seen it before, where you'll prove to someone that this is the way we should be proceeding on average, and then they'll find one example and come back and beat you over the head with it: well, see, this didn't happen. We expect perfection, and we expect it of others even though we wouldn't expect it of ourselves. And I make this argument all the time: if somebody's making mistakes, well, you can replace them with a machine. Well, the machine will make mistakes too. The question is, what's the difference in the error rate, not which one has no errors, because that just doesn't happen. I don't think that happens in any system.
B
I would suggest that, generally, absolutism and cybersecurity are not a good mix. If you find someone who offers you 100% guaranteed cybersecurity, I would make sure you have a good legal team, because you are not being guaranteed cybersecurity, for those reasons.
A
This is one reason why we do layers in cybersecurity. We know this stuff intuitively. We know that everything's going to fail; therefore, I need layers of security. Certainly we should be applying that thinking to this as well.
C
I want to go back to Michael's research, because this was an eye-opener for me. We had a feeling that there was a decay rate for training, and in fact we've subsequently used some of this research in our scoring model for how we score the impact of training on an individual: the training score value in our personal risk score model now decays to reflect what we're learning. Now, what was interesting, Michael, is, can you talk a little bit about what you found after day zero? So the 30-day, the 90-day, and specifically, as I mentioned, the 360-day numbers, just calling those out.
B
Sure. So we know things happen, and we have a number we can put on it: based on this data, 3.5% is the probability that stuff happens. After a month, 30 days, it increases to 5.7%. At 90 days, it increases to 9.3%. Wait another month, to 120 days, and it jumps up to 22%. After six months, it jumps up to 45%. And if we take it all the way out to a year, so a year after receiving cybersecurity training, the likelihood of someone who has clicked clicking again is 95%. So there is a curve at which these skills decay, and we can put some probability numbers on that curve.
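The decay figures quoted above can be sketched as a simple lookup. Michael's point is that the decay is a curve, not a line, so the linear interpolation between the quoted points below is only a convenience for reading off intermediate days, not a fitted model.

```python
# Re-click probability (in %) at the days quoted in the episode.
DECAY_POINTS = [(0, 3.5), (30, 5.7), (90, 9.3), (120, 22.0), (180, 45.0), (365, 95.0)]

def click_probability(days_since_training: float) -> float:
    """Interpolate linearly between the measured points; clamp at the ends."""
    pts = DECAY_POINTS
    if days_since_training <= pts[0][0]:
        return pts[0][1]
    if days_since_training >= pts[-1][0]:
        return pts[-1][1]
    for (d0, p0), (d1, p1) in zip(pts, pts[1:]):
        if d0 <= days_since_training <= d1:
            frac = (days_since_training - d0) / (d1 - d0)
            return p0 + frac * (p1 - p0)

print(f"{click_probability(90):.1f}%")  # a measured point from the episode
```

Even this crude sketch makes the later discussion concrete: the slope between 90 and 180 days is far steeper than between 0 and 90, which is exactly the window where the 90-day re-intervention argument lands.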
C
So what was interesting when we learned this from Michael is that we saw an intersection between the decline in the report rate at 90 days and the increase in risk at 90 days, which I think is like 14.8% at 90 days. And so we said, okay, now we know that this is a good intervention point: every 90 days. We've got our diminishing return to the point where it's good to re-inject that vigilance, in theory. And this is how we're taking what we're learning and trying to run a new experiment: can we create a flatter curve by re-injecting at the right time, drawing that line between those two distinct behaviors that we want to encourage? What was also important from Michael's research, and we'll probably get into this a bit more, because I have some critiques of some of the independent research that's been published around this space, which makes some really good insights and a whole bunch of garbage claims about phishing training efficacy, is this, and it's important for people to hear: annual cybersecurity training, as our data has shown and numerous other studies have now shown, will not move the needle in an impactful way on phishing risk. I am confident, looking at the research we're seeing as a direction, that this is not the optimal intervention frequency. That's not to say that annual security awareness training for other behaviors or other compliance reasons is not important. It's to say that once a year is not enough, if you believe, as Michael has raised, that training for maintaining vigilance has a decay rate.
A
Well, yeah, if you're prepared to let your whole defense decay to zero before you retrain, once a year is the way to do it.
C
Yeah, so that's really important. But the other side of Michael's findings so far, in terms of Security Awareness Month, is that you can do it too much. So on one hand you can do it too little, which is once a year. And what we've seen with Cybersecurity Awareness Month is that that level of intense activity has a negative consequence: a decline in the positive, beneficial behavior of reporting. So now we've got, for the first time ever, the opportunity for people to have a risk-appetite and return-on-investment equation backed by empirical data: what is our risk appetite? What are we willing to do? I feel like this has been an incredible addition to the state of knowledge in this industry, to say, hey, you can now make more informed decisions. And that's worth a lot. And then I've got some other data I want to dive into as well.
A
But let's just recap this, and we don't want to make it a totally academic or totally theoretical discussion. There are hard, concrete results that have come out of this, hard, concrete data, that say: hey, don't train too much; that's not going to get you the results you want. And don't train too little. If you leave it a year and just do your annual rite of passage, you're not going to get the protection you want. Somewhere around the 90-day mark, I think I'm hearing you say, is where the data says the optimum lies. You're not going to get too much training and the burnout that happens in the mind; about 90 days is really the optimal interval for redoing your phishing simulations or training exercises.
C
I want to be really clear. For the phishing sims, our data from last year consistently shows a monthly cadence has the best outcomes. In terms of educating your people proactively, so outside of the sims, getting them to learn something, and again there are limitations of our data set here, and this is me trying to be the scientist I aspire others to be: the limitation is that this is as observed from computer-based training delivery. That is one method of delivery: people get assigned a course, they take that course, and then we look at what happened after they took the course in terms of the regular recurring behavior. Within those constraints, we've learned a couple of things.
B
One is, awareness does something. So against the idea that awareness does nothing, we have data to suggest that awareness does something. It changes how companies interact with their employees, and it changes how employees interact with the phishing simulations that they receive. So that's one thing we've learned: awareness is a thing, it does do something. What it does seems to have a temporal limitation on it. It changes for October; it doesn't maintain that change after October. It goes up, it goes down. So we have good reason to support our idea that awareness should be seen as heightened vigilance, not as a state change, as in patched humans, now they're good. Another thing that we've learned: phishing reporting is different from phishing clicking in terms of how people interact with the emails. That tells us that what we've learned about phishing clicking doesn't necessarily translate into what we know about phishing reporting, suggesting there's a whole lot that we don't know about phishing reporting. So lots more to investigate there. We've also verified that skills decay, which has been known about for some 20 years in the management sciences for other areas, also applies to the skill of detecting phishing. So skills decay is a thing in phishing, which supports our idea that awareness is a temporally limited phenomenon. We know that awareness does something, but it does it for a time and then it decays. And it doesn't decay linearly; it decays in a curve. So these are the things that we've learned.
C
In our application of that, in practical terms, regardless of your platform, I think you need to look at monthly phishing simulations, ideally done the way I recommend. One, ethically, so you've got boundaries: there are certain phish types you don't do, because, yeah, you'll get results, but HR is highly likely to have problems if you do a sex-themed phish. You're going to have a host of problems; you cannot do all the things that criminals can do, but that's not what matters from my perspective. Two, doing it on a monthly, randomized basis so it doesn't become predictable, so people don't know it's Phish Wednesday, because I think that backfires, among a whole bunch of other reasons. And three, making it progressively more difficult at the individual level, so it's seen as a learning opportunity, not just as a gotcha. And training interventions proactively every 90 days. Supplementing that computer-based training with in-person activities will always beat, in my view, computer-mediated education hands down. I do what we do to help organizations do this at scale and efficiently, but those that supplement with human interventions, I think, end up with a stronger program. So in terms of recommendations, there's that. But one of the things that I wanted to share on the show today is new data, and it'll tie into some of the reporting that came out of Black Hat these last couple of weeks that I think has been incredibly irresponsible. One of the new things I wanted to share today, and this is something that Michael helped us with, the survey questions, is that we always wanted to ask people why they clicked. And believe it or not, that's not actually something that's well published, or at least that I could find in the research. Any of the work that has been done is usually constrained to a university environment or a small singular environment, et cetera. It's not well enough understood.
And something else that Michael took from the world of biology in the last year kind of blew my mind. He said, have you ever thought about the theory of predators and mimicry? Well, what do you mean? Well, predators learn to look like non-threatening things so they can eat their prey. It's just a factor in nature, and it's called mimicry. So when we developed the why-we-click survey, which covers a couple of interesting areas, one of the questions we asked people was, okay, why did you click? And we gave some constrained answer possibilities. So a limitation of the science we've done is that we pre-selected a group of answers that we were most interested in seeing, and the answer options were: it looked legitimate; I was expecting something similar; I was curious; and I was afraid. I'm going to have a moment of honesty and say I was very biased, thinking that emotions were going to be a major factor. Michael was honest about what he was expecting to find from the data too. And mimicry, as in "it looked legitimate" or "I was expecting something similar," was 50% of all the answers, and it was 4,500-plus respondents. So this is statistically significant. It's across 211 organizations, and I think it's again the largest data set of its kind. So it's insightful, and now things got really interesting for me: emotions, and I'll dive in a little bit about that. Curiosity beat fear, marginally. Curiosity was 6%, fear was 5%, and it was not what I was expecting; I thought fear would be a much bigger factor. And the question for fear was "I was afraid not clicking would get me in more trouble than clicking," something to that effect. So I'll pause there for a second.
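The claim that these proportions are meaningful at 4,500 respondents can be sanity-checked with a standard confidence interval for a proportion. The Wilson score interval below is a generic statistical tool, not anything from Beauceron's methodology, and the 4,500 figure is taken at face value from the episode.

```python
# 95% Wilson score intervals for the reported survey proportions.
import math

def wilson_interval(p: float, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for an observed proportion p of n."""
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

for label, share in [("looked legitimate / expected it", 0.50),
                     ("curiosity", 0.06),
                     ("fear", 0.05)]:
    lo, hi = wilson_interval(share, 4500)
    print(f"{label}: {share:.0%} (95% CI {lo:.1%}-{hi:.1%})")
```

At this sample size the interval around 50% is only about plus or minus 1.5 points, so mimicry's dominance is solid; the curiosity and fear intervals, on the other hand, overlap slightly, which fits David's "marginally."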
A
Because I have a couple of questions, two, actually. One is in terms of the answers that you've given.
C
Yep.
A
And I appreciate you have to pick the answers; actually, they sound quite reasonable. But is there a possibility that the mimicry answer is the least embarrassing? In other words, "it looked okay." And the other question I'd have is, doesn't the fear answer only work if the phish was based on fear?
C
I don't think the fear answer only works when the phish is based on fear, but I don't have any data to support a conclusion yet. So, continuing to evolve as a scientist: I don't have the evidence to support that conclusion. What's interesting is that there were other options one could argue were more embarrassing, or, I don't know, emotionally neutral. One was "I was rushing," like, man, I'm just human, I was rushing, which was 17% of all clicks. The other one that blew my mind was that 21% of people said "I don't remember doing this." So what that left me with was: 21% of people don't remember doing it, and 17%, on reflection, go, oh shit, I was rushing. And people that don't remember don't report, so they didn't do the other beneficial behavior either, per the data. What that means is that 38% of clicks, I hypothesize, have more to do with how we work with email than with any of the inherent knowledge people brought into the battle. We say that phishing training helps you detect the signs of a phish, and that System Two thinking is that logical, attentive and, as Michael would explain, calorie-expensive mode of thinking. But that doesn't cover this almost 40% of clicks; those clicks are people being efficient. So if phishing training does not teach people about work style, about giving themselves enough time to do this, and other things, and admittedly our style of training is evolving to use this data and iterate this experiment, to say, what if we talk more about this, can we move the needle on this almost 40% of clicks, then it's not going to work. You can beat the drum until the day ends on look for the sender, look for the link, look for typos. By the way, looking for typos is a super bad idea, because a lot of attackers are now using grammar tools.
But that's what's really, really interesting from the data for me: it gives an indication. And this is the part I love about working with researchers. If you approach this as "I'm going to go to academic XYZ, get the answer to my question, and go make bank," I think you're going to fail. If you partner with them and say, "I have these things I don't know, I want to understand more about why I don't understand this, and then I want to continue to learn more things that I don't understand," it's going to be an incredibly fulfilling experience, and it gives you a chance to iterate and improve. But it's never going to be the wizard at the end of the yellow brick road.
A
Michael, I want to bring you into this, because I think this has got to be the holy grail of all of it: finding out why people click and why they don't. I think the bad guys and the good guys would both like to know. Where do we go from here in trying to refine that research?
B
I think we're progressing. We're getting beyond the initial stages of identifying that there is a problem and that people do click on things. There's an unfortunate reality that it takes, conservatively, at the fastest, 10 to 15 years to fully understand a problem if you're going to research it internationally across all disciplines. It can take two to three years to get a paper published, let alone get some feedback on it and update it. So let's say it takes 10 or 15 years. People like David have to go out and start solving problems immediately; they can't wait 10 or 15 years before they release a product. So we're advancing. I think we're at the stage now where there's some very interesting research that's looking at pretext, that's looking at context, that's looking beyond just the psychological differences between people that might make them more or less at risk. I would suggest that in a country like Canada, where we value diversity, looking at diverse elements psychologically and trying to pinpoint the ones that we think are not good for our company is not a useful line of scientific research anyway. But we've been through that, we've done that, and we've learned a lot of good things from it. We're now getting to the stage where we're getting into the more detailed things, the things that might help. So NIST advancing its Phish Scale, to try and come up with a way of rating how difficult phishes are, so that we're not comparing the world's easiest phish against the world's hardest phish when we're doing these metrics, that's an excellent step forward. There are people doing good work looking at what you were doing just before you got phished. Were you waiting for a package? That's very good stuff as well. There's also this work that we're starting to do, looking into how much of our response to an incoming email is instinctive. Was it driven by stress? Is our job the type of job where we have to process a thousand emails every hour?
How much time do people actually get to process these? Can we start training people in different ways so they get better at leveraging not-thinking about email, rather than just berating them for not thinking about email? I think there are a lot of places we can go. Knowing why people do things is perhaps something we may never get to, because that debate has been going on in general psychology for a very long time, and it really comes down to where you land in terms of determinism or free will.
C
Well, one thing I want to focus on: one of the cool things about our data set is the ability to go back longitudinally over time and say, okay, what's interesting about this group? And here's what I want to share about the fear group. Now remember, this was only 5% of the respondent answers. And remember, I've talked about average click rates, and our average click rate, I think, Michael, from the data set that you mentioned, was in the ballpark of 3.8%. Just keep that in mind: between 3 and 4% on average across all organizations, all time, all populations. This group, over all the time we tracked them as a cohort, had an average click rate in excess of 12%. And what I find interesting about that is that while fear as a motivator had the lowest response rate in the survey, the people that gave that answer click a lot. And what that tells us about organizational culture is psychological safety, and this is an area I want to do more work on. The other interesting indicator about psychological safety from this group involves one of the cool new metrics that's not talked about enough, called the post-click report rate. This is the in-between behavior, between what Michael was talking about, as in I clicked, and I reported it. This is something new, and it first twigged in my mind in 2024, when the Verizon data breach report talked about it. So the post-click report rate is the percentage of people who, after a phishing test, go back and report it anyway. And we're going to make a broad statement that this is in some way roughly worthwhile measuring in terms of what they are likely to do with a real email; that's my logical leap. In this fear-based group, the post-click report rate is 5%. Remember, the average report rate, proactive or all reporting, is 28%; the post-click report rate across all organizations and all answer types is 10%. They are half as likely to tell you they screwed up.
They click because they're afraid they're going to get into trouble, and they won't tell you about it. Remember that three-hour punishment-training joke we made? I wanted to circle back, because what our data could be indicating is that if you do that, you'll miss out on someone telling you when they make a mistake, and that is harmful. But there's another kind of harm I want to segue into, because I don't want to lose this. There are a number of headlines in the media, including as recently as seven hours ago, and these are from reputable media organizations, everything from Dark Reading to Cybernews. The headlines read like this: we've all been wrong, phishing training doesn't work; or, cybersecurity training doesn't work, people keep clicking phishing links. This was a major presentation at Black Hat, and it came from research from the University of California, San Diego. And there are some important findings in that research that I think we support, but this massive overstatement is actually causing harm. The reason I'm getting increasingly disturbed and am calling it out is that the CISO for a major electric vehicle manufacturer said on LinkedIn something very close to: well, all modern studies show that phishing training doesn't work and actually causes harm. That is factually inaccurate. Those headlines are factually wrong; the research paper itself does not support that conclusion. What the data from that research report says is this: annual cybersecurity training doesn't move the needle on phishing behaviors. Check; we support that. And that when the training delivery mechanism is a post-click landing page presented to someone, regardless of the content in there, with only slight differences between static and interactive content, the average amount of time people spent on it, according to their research, was 30 seconds.
In Beauceron's most recent data, the median time on a post-click landing page is 11 seconds. If that is your educational delivery mechanism, between 11 and 30 seconds, what a surprise: people don't engage with that material. It's a fundamental flaw in the delivery mechanism.
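The post-click report rate David defines above is a simple set operation over phishing-test events: of the people who clicked, what fraction also reported. The event records below are invented for illustration; a real platform would key on campaign and timestamp as well.

```python
# Sketch of computing the post-click report rate for one phishing test.
# Event records are hypothetical.

def post_click_report_rate(events: list[dict]) -> float:
    """events: {'user': str, 'action': 'click' | 'report'} for a single test."""
    clicked = {e["user"] for e in events if e["action"] == "click"}
    reported = {e["user"] for e in events if e["action"] == "report"}
    if not clicked:
        return 0.0
    return len(clicked & reported) / len(clicked)

events = [
    {"user": "ana", "action": "click"},
    {"user": "ana", "action": "report"},   # clicked, then owned up to it
    {"user": "bob", "action": "click"},    # clicked and stayed silent
    {"user": "cam", "action": "report"},   # reported without clicking
]
print(post_click_report_rate(events))  # 0.5
```

Note that "cam" counts toward the ordinary report rate but not this metric; the denominator is deliberately only the people who made the mistake, which is what makes it a proxy for psychological safety.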
A
What passes for research at times in this industry is appalling. I'm sorry, you have a lot of convincing to do to tell me that training doesn't work, on anything. Just a thought on that one: your bullshit meter might go off on that, and a headline so convenient that it gets you going, oh, you see, we don't really need that stuff, is the type of thing your bullshit meter should go off on as well, to use the technical term. But I want to go back to this piece that you said: if you've clicked on something and then you get timely retraining, that's important. Then you equated that to clicking through to a new page. How does that fit into what you're saying?
C
This is something that Beauceron was actually built around. Before we built our product, we used one of the market-leading vendors when I was at the university, and I saw one of the big limitations when I looked at that data. This was 2015. At the time, the only way people got training delivered was that in the moment when they clicked, their web browser would open up, and in that web browser, right in that moment, not in an LMS, not in a separate learning system, et cetera, they were presented with information: this was a phishing test, here's what you can learn from this. Some of these iterations might have a video, some might have an interactive module, but keep in mind it's presented in the browser window, and it happens after you did something you weren't, in general, expecting. And so what happens for a lot of people is, at zero seconds the thing pops up, boom, close the window. There was zero training impact on those individuals in terms of learning from the material presented, because they never read the material. So when we designed what we did, we built a system where they got a follow-up email: hey, this thing happened to you. We assigned training to them after the fact, and they went back into a separate system, and that training wasn't temporal; it didn't just disappear because you closed the browser window. In fact, we built a system called Nagbot, which is lovingly turned on and will never let you go until you go back and do the training. And I'm not saying that training is the magical solution; as Michael said, be wary of anybody that tells you, I have the magical solution to all of your cybersecurity needs and all your problems go away.
No. But people who took training, and different ways of delivering that training, and more importantly, and I just want to hammer this point, different approaches, mattered. One of the things we changed in the last couple of years was using emotional intelligence as a driver, based on a whole stream of academic research showing that teaching people the technical indicators of phishing has limitations. Talk to them about the emotional impact and use those things; we saw benefits from that, statistically significant benefits. So headlines that say all security awareness training is useless because people still click on phishing? Bullshit. Yes, you can have the best theoretical training delivered the best way with the best wording, and we're still going to have, based on our research, a 3.5% probability of a click. Welcome to the human condition. That's still better than doing nothing. And it's irresponsible, because, and I'm sorry, this is my soapbox, so I'm just going to run with it for a second, one of the things that's really pissed me off is this. I have anecdotal data from one of the globally largest leaders in this space, and I believe them on this because I've seen enough of it repeated in experiments, that the baseline click rate, if your organization does not do phishing tests, is somewhere in the range of 30% plus. And there have been numerous blind phishing tests done; it's a high baseline rate. What none of these headlines, and none of the studies, particularly this UCSD study, talk about is the implicit benefit of phishing testing itself as an experience in slowing people down. What do I mean by that? From a science perspective, there's something in sociology referred to as the Hawthorne effect. It came about when researchers realized that when people saw they were being studied on the factory floor, their behavior changed. In the case of the Hawthorne effect, people worked harder; they changed because of the observation. That's the key thing.
People have to know that this happens. A, that's fair; but B, if I'm making the argument that the Hawthorne effect is a thing, they've got to know their behavior is being observed, and doing it lowers the risk. None of these studies have disproven that. Period, full stop. It's absolute garbage; I'll meet you on debate street any day of the week on this issue. And it's really hard to prove the absence of something, right? That is what it is.
A
That's never been questioned. It's basic business logic: if you pay attention to something, it will change. We all know that.
C
Yeah. So that's where I get so angry about this in particular. And I know I'm asking for it, because Christian at UCSD and I are like cats and dogs on LinkedIn. I think it's the height of irresponsibility to let the media run with this story, and social media do their thing, and not say, louder: actually, our research shows that this remedial training, delivered this way, did not drive the kind of results we wanted to see, and there are reasons for that, like not enough people engaging with it. Not that this overall approach of getting people to experience phishing doesn't have some benefit. They don't have the receipts to prove that, and it's dangerous to claim it. Absent that, I think: first of all, do no harm.
A
Michael, let's draw you into this whole thing, because I look at this stuff and, like I said, I think that most of what passes for technological research, especially if there's a screaming headline out of it, should be, I'll be nicer, I'll say it should be suspect at least. How can we, as business people, start to think about what we should be paying attention to versus what we shouldn't?
B
The first thing to do if you're going to get into scholarly work is to give up the notion of absolute truth. Truth is a percentage and it changes over time. This is more true now than it was before. With a change in the evidence, your perspective on what is true must change. That is just part of being a scholar and doing research. Otherwise there's no point; you might just decide that something is true and then nobody can change your mind. The second thing to keep in mind is that science is really, really hard. Every piece of research wants to be terrible for some reason. If it can go wrong, it will go wrong, and you have to fight tooth and nail to pull whatever research you're doing back from the brink of absolute garbage. And this is the hardest kind of research, in my opinion. Some people like to say that there are hard sciences and then there are the social sciences. For me, the social sciences are harder, because your subject matter changes over time and can be influenced by the results of your work. You end up with a feedback loop that can change the ability of other people to replicate your studies, which may well be the case with this paper. People can read the research, and people change over time, so it's very hard to replicate any kind of study. This is to say that this stuff is really hard to do well. And there are some things in the research that David's talking about that are done quite well. It's a good sample size, it's sampled well. The method is good, the method of testing is good, and the analysis in the results section is also quite good. So they've done a lot of things right. And what they've shown is that doing training once a year has limited impact over time, if you're measuring people's behavior in terms of phishing clicks. That's one thing that they can very clearly say, and they've said it, and it's well said in the paper. A very good piece of research in that respect.
There are things that shouldn't be taken out of this paper because they weren't proven in this paper. This paper did no cost-benefit analysis. They only looked at one benefit, which was a reduction in click rate. They didn't look at any other kind of benefit that comes from running training: engagement with the security process, other types of security outcomes, business email compromise, general fraud, password compliance, all of the other things that go into a cybersecurity training program and that can't be assessed merely by looking at phishing clicks. The other thing they didn't analyze is how much was actually being spent. If we look at Canada, for example, and at the 2019 Statistics Canada small business survey on cybersecurity, about 4% of cybersecurity expenditures on average goes toward training. It would be a hard sell to convince me that redirecting that 4% to a more technical solution would do more than it is currently doing in terms of engagement, in terms of cultural change, in terms of bringing people on board. Even if what they're saying is true, and I'm not doubting their outcomes, it's not to say that phishing training doesn't work. It's to say that phishing training delivered in this one place, in this one organization, delivered in this way, doesn't give a behavioral outcome in terms of phishing clicks that would be worthwhile.
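A rough back-of-envelope illustration of the cost-benefit point: the ~30% untrained baseline click rate, the ~3.5% trained click rate, and the ~4% training share of security budget come from this discussion, but the phishing volume, incident cost, and total budget figures below are purely hypothetical placeholders, not numbers from any study mentioned here:

```python
# Back-of-envelope cost/benefit sketch. Click rates (30% untrained, 3.5%
# trained) and the ~4% training share of budget are cited in the discussion;
# phish volume, cost per incident, and total budget are hypothetical.

def expected_incident_cost(click_rate: float, phish_emails: int,
                           cost_per_incident: float) -> float:
    """Expected annual loss if each clicked phish carries a fixed average cost."""
    return click_rate * phish_emails * cost_per_incident

# Hypothetical organization: 1,000 real phish per year, $5,000 average incident cost.
baseline = expected_incident_cost(0.30, phish_emails=1_000, cost_per_incident=5_000)
trained = expected_incident_cost(0.035, phish_emails=1_000, cost_per_incident=5_000)

security_budget = 500_000                  # hypothetical annual security spend
training_spend = 0.04 * security_budget    # the ~4% share cited for training

avoided = baseline - trained
print(f"Expected loss, untrained: ${baseline:,.0f}")
print(f"Expected loss, trained:   ${trained:,.0f}")
print(f"Avoided loss: ${avoided:,.0f} vs. training spend ${training_spend:,.0f}")
```

Under these assumed inputs the avoided loss dwarfs the training spend, but the output is only as good as the placeholder numbers; the sketch shows the shape of the comparison the speakers say the paper never made, not a real result.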
A
Yeah, I think one of the problems I have is a business problem and a modern media problem. I'm not even going to call it journalism at this point, because I don't think it rises to that level. The question you have to ask yourself, as a researcher or as a business person, before you shoot your mouth off is: how much harm could I cause if I'm wrong? And that's what offends me in this particular case. If you're going to say something like all phishing training is useless, I want receipts, I want data before you say something like that. The clickbait headline is killing journalism. It's a huge problem that we have, and it's just plain irresponsible. I'm with you on that.
B
I mean, I can only assume, and it's a big problem in science, that with the amount of publishing that's happening, the volume of papers out there is tremendous, so I can only assume this was an effort to get noticed. The only problem is it worked. Now they are being noticed, and they're being noticed by people who don't seem to have really dug into the meat of the paper. Scientific papers are hard to read outside of your own discipline. If you're from, say, a physics or an engineering discipline and you come across a paper like this that is more social science, you might not necessarily know how to interpret the results as they're presented. But these things are escaping while they're still being discussed in the scientific context, because this is not a done deal. As you said at the start, this is an extraordinary claim, and it requires extraordinary proof. There should be not just one paper; there should be multiple papers that dig into this, and there are more out there. I'm not saying they're alone, but there are other papers that suggest the opposite. So there needs to be more discussion within the scientific community. Perhaps the solution is for more scientists to talk with journalists, but the other side of that is that a lot of scientists have been burnt by journalists before: they've said something entirely reasonable and then been taken slightly out of context or misrepresented. So this is a difficult thing that we're struggling with in our current media environment. I would suggest that the people who are latching onto this because it aligns with their general philosophy, or worst case their business needs, should be a little bit more cautious, because the next big paper that comes out could say the opposite and they could end up with egg on their face.
C
And the thing that worries me is we're heading into the territory of: remember all the studies that would come out saying red wine is good for your heart, then red wine is bad for your heart? It was all over the place. The worst part is we're going to leave people so confused, and the progress we have made in saying it is important to work the human side of this equation is being lost. And some of the researchers tip their hands. Like I say, some of these are pre-publication things that media is now covering. They're not even peer-reviewed yet, and they're out there without the benefit of that process. So, as Michael reminded me, it's probably not fair to beat up on those yet, because they haven't been beaten up by their peers to continue the discussion.
A
Oh, I'm going to disagree with you. You put something out in the world, you take accountability for it. Peer review is not going to save you from this. If we were treating it as serious academic research, they could say, look, this is an early paper, it's an early discussion. But you have a responsibility for what you say. That's just grown-up land, right?
C
And I think that's the part. Like, I'm not as hard on those as I am with my frustrations over the one particular healthcare paper being overplayed. Because, as Michael said, there was another study done by the University of Zurich a few years prior. Same enterprise scale, 15 months, one of the longest studies of its time. It also found problems with this post-click landing page delivery methodology. But one thing it had, UCSD completely misses: you read their paper and they say, we didn't have a method to track reporting, so we didn't. Not good enough. Not even close. If you're going to make these kinds of claims about the lack of benefits of doing this thing, or, as one researcher says, that the juice isn't worth the squeeze, without doing a cost-benefit analysis? Man, that pisses me off. Again, I will move to where the science is on this, and when I say the science, the discussion on this. But it's not been proven. And the last thing I'll end on: a common CISO criticism of phishing simulations is that they erode trust between individuals and the organization. Our survey data so far shows that 70% of people said, I learned from the experience. Another independent, peer-reviewed, qualitative survey-based paper in December 2024, with a survey size of around 750,000 people, so not just a small organization, showed that 86.9% gave positive or very positive answers about the benefits of phishing simulation. The title of the paper is "I'm being cautious around touching the hot hob", and hob, I had to Google, is the British word for stove, because people responded saying, hey, it just reminded me: oh yeah, I can do this, I need to stay vigilant. So the conclusion that this only causes harm is not backed by the evidence. The conclusion that it erodes trust if you do it poorly? Sure.
Like Taco Bell, you know: if you fire a bunch of people and then send out a holiday bonus phish shortly afterward, is that kind of shitty? Yeah. So you do need to consider the ethical and moral constraints around some of this stuff. But as a whole, to write off the entire activity? Well, you just gave the criminals a win. Great. Can we not give them any more advantages over us? Really? And then the last part: Michael raised the point about the money, as if the paltry 1 to 4% of cybersecurity spend that goes toward the human side could be better spent elsewhere. When I stop and think about this, and people say, what about FIDO keys? Well, those can be defeated too, because of convenience versus security and all that other fun stuff. Don't throw the baby out with the bathwater because of one study.
A
One of the most terrible ways to make an argument either in business or in science is to compare two things that might both be good.
C
Yeah.
A
And without an objective way of saying, if I can't spend on one, then I'll spend on the other.
C
And the reality is the minute you.
A
Start to put this against one thing and say, well, if we had this 1 to 4%, we could spend it on something else. Then great, let's see the numbers. But maybe you should increase your spending.
C
Or maybe, and I guess this is my point: yes, there's a place for good FIDO2 technology, and defense in depth remains a valid strategy. I'm never going to be on the show saying get rid of your firewall, get rid of your email filter, turn off the EDR and spend 100% of your budget on human stuff. Not going to happen. We are part of a layered approach. But I cannot subscribe to the premise that educating people, helping them stay vigilant and helping them be successful and secure, is a bad thing and not worth a percentage of the defense-in-depth spend.
A
Yeah. So just to wrap up: we've gone through a lot of stuff. We've talked about some of the research that you've done, some of the findings, and some of the conflicts, or at least the friction, between academic research and what I'll call clickbait research. Being able to tell the difference is a huge piece, because we're all influenced by these things we hear, and that's why we have to tackle this. Sometimes when we hear what we want to hear, we should be asking ourselves really serious questions. I just want to wrap this up in terms of what people can take away from it. Can I toss back to both you and Michael for a little bit of a wrap-up on what you want people to take away from the discussion we've had?
C
That this area needs so much more study, and that there are no absolutes in this space. That educating people and doing phishing simulations should be part of your defense in depth. That the jury's not in on whether we should stop doing this. That there are absolutely right ways and wrong ways to do it in terms of the frequency, the approach, the educational delivery mechanism and the content itself, and that this is an iterating space right now. Don't throw the baby out with the bathwater. Annual training is not cutting it in terms of sustaining awareness as vigilance; you need more frequency and consistency. And the benefit of people knowing, hey, there's a traffic cop out there ready to pull me over in the email speeding lane, and slowing down, is a good thing. Those are my core messages.
B
We've learned that not all cybersecurity education just works; we have to be a little bit careful about how we implement these things. This is a genuine arena, a community of practice, and we need to be careful about which practices we're adopting, how we're implementing things, and making sure we are working to measure and push the field of cybersecurity awareness education, or human risk management, whatever you want to call it, forward in a way that is measurably forward. Because as we've shown, you can do this right and you can do it wrong, and there are a lot of factors that are not necessarily intuitive that we should be considering while we're doing this. I would suggest, if you are a professional in this space, that you make sure you're linked with a group where you can compare what you're doing with other people. I might be biased in this, but make sure there's some kind of academic researcher within that group, so you have someone you can reach out to when these papers come out. And again, don't throw the baby out with the bathwater. There are some excellent things in all of these pieces of research, and they all need to be done, even if they're wrong, just to be contradicted. Having someone who can help you interpret and understand where those fit in the wider discussion within the field of cybersecurity education will take you a long way toward learning what you should learn and understanding how to interpret these kinds of things when somebody asks you about them. I'm big on empiricism, and I'm big on making sure that we understand how to move the field forward and not sideways, which requires measurement, which requires these kinds of studies. But it also requires that we have our heads screwed on and that we're doing what is best practice and not just common practice.
And I think working together, between academia, private practice, and even what they're doing in governments around the place, is a good way of making sure we adhere to that. And for those that say you shouldn't trust a vendor: don't trust them. Don't trust a vendor, don't trust me either. Don't trust anybody. Trust them to do the things that are in their best interest. David needs to make money; I need to publish papers. Understand that that's what's driving us, and understand what that means in terms of the things we're saying. I'm not going to say things that are provably wrong, because that would harm my ability to put out good science. David's not necessarily going to say things that would cost him money. And anyone saying they're absolutely, 100% going to solve all of your security problems, guaranteed? That might be one of those things.
A
Yeah. And I'm going to chip in to say I think there's a difference between thinking critically and thinking cynically.
C
Yes.
A
And I hope at the end of this program that we leave people with the idea that we had this discussion to help people think critically. Cynical thinking gets you nowhere. Critical thinking, on the other hand, forces you to test ideas, maybe to change your mind from time to time. Nothing wrong with that. When we say that there's a 30% click rate in the average organization that does no training, that means things are falling through our technical defenses. We have claimed for ages that we need layers. We need layered defenses. The second thing that we've claimed is that people are our greatest defense. So let's stop talking about people as our greatest weakness. Let's talk about people as our greatest defense and how we can make them better at doing that. My guests have been Michael Joyce, who's from the Human Centric Cybersecurity Partnership at the University of Montreal and a PhD candidate there as well, and David Shipley from Beauceron Security. We thank you for listening. I'm your host, Jim Love. We'll talk to you next week. We'd like to thank Meter for their support in bringing you this podcast. Meter delivers full-stack networking infrastructure, wired, wireless and cellular, to leading enterprises. Working with their partners, Meter designs, deploys and manages everything required to get performant, reliable and secure connectivity. They design the hardware and the firmware, build the software, manage deployments and run support. It's a single integrated solution that scales from branch offices to warehouses to large campuses to data centers. Book a demo at meter.com/CST. That's M E T E R dot com slash CST.
Host: Jim Love
Guests: Michael Joyce (CEO, Human Centric Cybersecurity Partnership, U Montreal), David Shipley (CEO, Beauceron Security)
Date: January 3, 2026
This episode centers on the intersection of academic cybersecurity research and real-world practices in cyber awareness and training. Jim Love hosts a candid, data-driven discussion with Michael Joyce and David Shipley about what actually works in human-centric cybersecurity defense, the impact and decay of cybersecurity awareness, the effectiveness (and pitfalls) of training, and the dangers of over-simplified, headline-driven industry conclusions. The dialogue is anchored in recently published large-scale research from Beauceron Security and the University of Montreal, providing rare clarity on how often—and how—organizations should train their employees, what motivates users to report or ignore threats, and how to evaluate research responsibly.
[02:15] Michael Joyce explains UMontreal’s Human Centric Cybersecurity Partnership:
[03:46] Host Jim Love laments the lack of recognition for Canadian university research, urging pride and awareness of academic contribution to global cybersecurity.
[07:24-08:54] David Shipley discusses partnering with independent academic researchers:
Jim Love highlights the common pitfall of commercial bias in corporate research, emphasizing the importance of independent review and humility.
[13:08] Joyce and Shipley explain their access to a historic dataset of 700+ Canadian organizations and 250,000 people.
[14:37] Key Findings:
Memorable Quote:
“We can say that during the month of October... there is a decrease in the clicks on phishing simulation emails of above 10%... But in phishing simulation reporting, there is a decrease. So... it might speak to perhaps security fatigue.”
—Michael Joyce ([14:37])
[19:06-23:15] David Shipley discusses findings on simulation frequency:
[21:40] Shipley’s story: Woman caught by a Taco Bell phishing sim sent for three hours of “Taco Bell Alcatraz” training—used as a caution against punitive or excessive measures.
“No, guys, this is not the way to do this... You know, there's a way to do phishing well, there are a lot of ways to do it horribly.” —David Shipley
[23:49-28:36] Michael Joyce deconstructs “awareness”:
Decay Curve:
Key Insight: Most employees do “nothing,” which is the most calorically efficient response, followed by deleting or reporting. Not reporting isn’t apathy; sometimes it’s efficiency.
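The decay point can be illustrated with a toy model. The exponential form and the 90-day half-life below are assumptions chosen purely for illustration; the research discussed in the episode establishes only that awareness decays without reinforcement, not this specific curve:

```python
# Toy model of awareness decay after a single training event.
# Exponential decay and the 90-day half-life are illustrative assumptions,
# not parameters estimated by the studies discussed in the episode.

def awareness(days_since_training: float, half_life_days: float = 90.0) -> float:
    """Fraction of peak awareness remaining, assuming exponential decay."""
    return 0.5 ** (days_since_training / half_life_days)

# With a (hypothetical) 90-day half-life, one annual session leaves little by year end.
for day in (0, 90, 180, 365):
    print(f"day {day:3d}: {awareness(day):.2f}")
```

With any half-life measured in weeks or months, a single annual session leaves only a small fraction of peak awareness by the end of the year, which is the intuition behind the "more frequency, more consistency" recommendation.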
[30:14-34:48] Shipley: Reporting is a benevolent behavior with a cost. Motivation to report must be sustained with feedback loops that make it worth the user’s time.
“Reporting can help protect everybody... But the other deeper opportunity is that reporting is more beneficial to you than not reporting... So we got to make it worth more.” —David Shipley
[35:17] Joyce: Reporting is distinct behavior from clicking; motivations, decay curves, and implications all differ.
[45:44-50:00] New Beauceron survey of 4,500+ clickers (across 211 orgs):
“(Rushing and memory lapses) means that 38% of clicks, I hypothesize, have more to do with the how we work with email than any of the inherent knowledge...” —David Shipley
The referenced academic study in fact only proved that annual training alone is insufficient.
Most industry clickbait headlines misrepresent findings, creating harmful narratives.
“What passes for research at times in this industry is appalling... If you’re going to say something like all phishing training is useless, I want receipts.” —Jim Love ([69:47])
Shipley notes baseline click rates in untrained orgs are ~30%—far higher than among those with any meaningful program.
Training effectiveness, delivery mechanisms, feedback loops, and frequency all affect outcomes—but the worst approach is doing nothing.
On vigilance and decay:
“If you train somebody on something once and you provide them awareness, it is not a software patch. You cannot expect them to maintain that state of awareness.”
—Michael Joyce ([00:35, 24:25])
On data-driven results:
“Awareness does something. The idea that awareness does nothing—we have data to suggest that it changes how companies interact with their employees and how employees interact with the phishing simulations... but it seems to have a temporal limitation.”
—Michael Joyce ([43:59])
On optimal frequency:
“Annual cybersecurity training as our data has shown... will not move the needle in an impactful way on phishing risk. ...Once a year is not enough.”
—David Shipley ([41:28])
On misleading research:
“What passes for research at times in this industry is appalling. …[If] you’re going to say something like all phishing training is useless, I want receipts, I want data before you say something like that.”
—Jim Love ([69:47])
On layered defenses:
“We have claimed for ages that we need layers. We need layered defenses. The second thing that we've claimed is that people are our greatest defense. So let's stop talking about people as our greatest weakness. Let's talk about people as our greatest defense.”
—Jim Love ([81:56])
| Topic/Segment | Timestamp |
|-------------------------------------------------|-----------------------|
| Human-centric research at UMontreal | [02:15] – [03:46] |
| People vs. technology/culture in security | [04:14] – [06:18] |
| Ethics and value of academic–corporate collab | [07:24] – [08:54] |
| Dataset overview, Cyber Awareness Month impact | [13:08] – [14:37] |
| Phishing sim frequency Goldilocks zone | [19:06] – [23:15] |
| Awareness as state, decay over time | [23:49] – [28:36] |
| Reporting motivation & feedback loops | [30:14] – [35:17] |
| Why people click — survey & insights | [45:44] – [50:00] |
| Black Hat, headlines controversy | [55:33] – [66:20] |
| Conclusion: defense in depth, critical thinking | [81:49] – [end] |
The episode makes an eloquent case for interdisciplinary humility, the value of empiricism, and the importance of critical (not cynical) reading of both research and media on cybersecurity awareness. The findings are clear: neither too much nor too little training works. The “Goldilocks solution” is a measured, data-driven approach—one that treats users as assets, not liabilities, and centers on practical, psychologically-informed, and constantly-refined programs.
[Prepared for those who want actionable insights without sitting through the full episode—quotes and context provided for credibility and further listening.]