
Host/Announcer
Best advice for a CISO. Go.
Julie Meyerholtz
My best advice for a CISO is to focus on the business. Understand that we're only there because we're running a business, and we need to manage risks for that business. Look at cybersecurity from the P&L perspective, protect what's most important to that balance sheet, and accept risk in other places.
Host/Announcer
It's time to begin the CISO Series Podcast.
David Spark
Welcome to the CISO Series Podcast. My name is David Spark, the producer of the CISO Series, and joining me is my co-host since day one, six and a half years ago. It is Mike Johnson. Mike, say hello to the audience.
Mike Johnson
Hello audience of six and a half years. It's amazing how time flies actually when
David Spark
people are listening to this a little bit longer though.
Mike Johnson
I should mention hello to the listening audience of a little bit longer than six and a half years.
David Spark
Exactly. Our sponsor for today's episode, who's been a sponsor for a few years with us, we love them, is Vanta. Automate compliance, manage risk, and accelerate trust with AI. You're going to learn more about that a little bit later in the show. But first, Mike, this episode is dropping the Tuesday of RSA. Tuesday, I think, is the day they open up the show floor late and do the bar crawl, so this is probably one of the hotter days. I guess Tuesday or Wednesday is a hotter day. So what we're going to do is make some predictions about RSA. Your first prediction, let's hear it.
Mike Johnson
Nothing about AI.
David Spark
I'll be surprised if I hear one person mention it.
Mike Johnson
Shocked. Shocked.
David Spark
I'll be shocked if it comes up.
Mike Johnson
It's going to be all about how we need to worry about mainframe security.
David Spark
Mainframe security.
Mike Johnson
There'll be a lot of discussion about COBOL and how you need to make sure you don't have SQL injections in your COBOL. I'm actually predicting we'll have a 0-day in a mainframe from some COBOL code, with a SQL injection with a cross-site scripting vector to get there.
David Spark
I would put a lot of money down on that. I would totally see that. Let me say something. I just found something out. My wife, who's younger than me, took assembly programming language in college and I'm like, what? How is that possible? Because it wasn't available when I was in school. Because that was antiquated. I mean, I took Fortran to give you an idea of how old I am.
Mike Johnson
Hey, I took Fortran. I did not know you took Fortran.
David Spark
I did take Fortran. But did you take assembly?
Mike Johnson
I did, actually.
David Spark
Did you really?
Mike Johnson
So the Motorola 68K chip, we were learning assembly for that.
David Spark
Let's bring it back.
Mike Johnson
Yes, let's bring back assembly. Down with Rust. Let's go with assembly.
David Spark
By the way, here's the coolest thing I've seen at past RSAs, they've had the Enigma machine there. Have you ever gotten your hands on an Enigma and played with it? It's pretty darn cool.
Mike Johnson
It is very cool. I highly recommend. If anybody gets a chance, that's a really good opportunity.
David Spark
I remember when you were at Lyft, you invited the guy who does a whole lecture on the Enigma and has people come up and play with it.
Mike Johnson
Exactly. He does. I haven't caught up with him in a while, but he would do traveling shows where he would bring an Enigma and he would talk about the history and really the place of Enigma in computer security.
David Spark
And by the way, explain what the Enigma is for those people who don't know.
Mike Johnson
It was really one of the early encryption and decryption systems. The Nazis were using it in World War II to hide the messages that they were transmitting over the wires, over the air. And it was also famously the subject of one of the first instances of code breaking.
David Spark
Right.
Mike Johnson
The British found a way to break the encryption of the Enigma that the Germans thought was impossible.
David Spark
Well, it seems like when you first hear how it works, it does seem impossible by the standards back then. But then you realize the simple missteps that they made, because every sort of message would have a sign-off of "Heil Hitler." So that was something standard in every message that they could figure out.
Mike Johnson
Yeah. The fascinating tale is really about how the encryption was broken. And a lot of it was exactly what you were just saying, David, of they kept reusing a known plaintext and it was that known plaintext that we used to actually break the encryption.
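The known-plaintext trick Mike describes can be sketched in a few lines. Enigma's reciprocal wiring meant no letter ever encrypted to itself, so codebreakers could slide a suspected plaintext (a "crib," like a standard sign-off) along the ciphertext and discard any alignment where a letter would have mapped to itself. A minimal illustration, with entirely hypothetical strings:

```python
# Crib dragging against an Enigma-style cipher: because the machine never
# encrypts a letter to itself, any alignment where ciphertext[i] == crib[i]
# is impossible and can be ruled out before testing rotor settings.

def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
    """Return offsets where the crib could plausibly align with the ciphertext."""
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        # Reciprocal wiring forbids any letter encrypting to itself.
        if all(c != p for c, p in zip(window, crib)):
            positions.append(start)
    return positions

# Hypothetical intercept and crib; offsets 0 and 4 are ruled out because a
# letter would encrypt to itself there.
print(possible_crib_positions("ZEILHQXLER", "HEIL"))  # → [1, 2, 3, 5, 6]
```

The real Bombe runs went much further, using the surviving alignments to test rotor configurations, but the no-self-encryption rule alone already prunes the search space dramatically.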
David Spark
Yeah, it's a totally fascinating story. All right. We're not going to be doing anything about COBOL, assembly, or the Enigma on this episode. That's where the conversation ends. Let's bring on our guest. Thrilled to have her here, first time she's ever been with us, and very excited to have her as part of today's episode. You heard her at the top of the show. It is the CISO of Brunswick Corporation, Julie Meyerholtz. Julie, thank you so much for joining us.
Julie Meyerholtz
So excited to be here. Thanks for having me.
Host/Announcer
Got a better answer than "we're trying"?
David Spark
Let me translate. "Working as intended" means "for you." "We knew" means "we shipped it anyway." "Your problem now." That's Rock Lambros of RockCyber calling out Google, Microsoft, and Amazon for offering default cloud configurations that let low-privileged users hijack high-privilege service agents. Researchers find privilege escalation paths, vendors shrug and call it a feature. Rock argues cloud providers have weaponized shared responsibility as a shield for shipping insecure defaults. And he's not buying the "managed means secure" narrative. So hyperscalers market "secure by design" but ship with privilege escalation baked into the defaults. What does shared responsibility mean? Mike, are CISOs stuck auditing every service identity because we can't trust the foundation? And I just want to go back to this whole secure by design, or architected secure, theory. We always hear this, but this kind of breaks it, doesn't it?
Mike Johnson
First, I think we need to go back to what the shared responsibility model means. It's a very important concept, because fundamentally it's supposed to tell you what the cloud provider is taking responsibility for and what you are. You can't really work on the physical security of the server hosting your workload at some data center somewhere around the world. That's one of the things they take responsibility for. That's their part of the shared responsibility. At the same time, they are shipping software that is designed to be flexible. You can make it do whatever you want. You can misconfigure it from a security perspective, or you could configure it in a way that is critical for your business and is, to somebody else, insecure, because all these things interact. So it really is important to understand that, conceptually, some of it is your responsibility and some of it is theirs. But there was also a time, and I think things have gotten better, though there's always opportunity for improvement, when there were a lot of insecure defaults. There were so many breaches due to misconfigured S3 buckets. It was a daily thing many years ago. Amazon got better. They figured out that that was not acceptable. And other vendors have learned from that lesson. But at the end of the day, some of it is your responsibility and some of it is the vendor's. The important thing is for you to understand where their responsibilities end and where yours begin, and therefore what you can influence, what you can actually control.
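The S3 bucket misconfigurations Mike mentions sat on the customer side of the shared responsibility line. A minimal sketch of that kind of customer-side audit: checking a bucket policy document for statements that grant access to everyone. The policy shape follows AWS's published JSON policy grammar, but the bucket, Sids, and account IDs here are hypothetical:

```python
# Scan an S3-style bucket policy document for Allow statements whose
# principal is a wildcard, i.e. statements that make the bucket public.

def find_public_statements(policy: dict) -> list[str]:
    """Return the Sids of Allow statements that grant access to everyone."""
    public = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # AWS policies express "anyone" as Principal: "*" or {"AWS": "*"}.
        if principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        ):
            public.append(stmt.get("Sid", "<unnamed>"))
    return public

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "TeamRead", "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example/*"},
        {"Sid": "WorldRead", "Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example/*"},
    ],
}
print(find_public_statements(policy))  # → ['WorldRead']
```

In practice teams run checks like this continuously, and AWS has since added account-level public access blocks, but the point stands: the vendor ships flexible defaults, and verifying how they're configured is your half of the model.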
David Spark
Excellent. All right. Your theory on shared responsibility, and do you push back at the maybe not-so-secure-by-default design of some of these tools?
Julie Meyerholtz
No, I entirely agree with what he just said. The systems are developed to be flexible, to enable each business to do with them what they need to do. And so we as a business, leveraging those third party clouds, need to take accountability for designing our security controls within that cloud, within the parameters of what we need for what we're doing. Privilege escalation may be something that a company wants to have, and it may enable them to do what they need to do for their business. But if I were to do that, I would segment it into a separate area that's remote from my corporate network, so if it does get impacted, it's not going to be a huge impact to the rest of the organization. And I think we as companies need to take accountability for owning security. It's not a 50/50 thing. There are certain things that the cloud providers will provide, but we need to make sure we have the right controls in place for what we're trying to do, for our corporation and our security profile.
Host/Announcer
Unexpected outcomes or failures
David Spark
Quote, "Everyone advocates for best practices until they hit production," end quote, was the thesis of a recent cybersecurity subreddit post. The community was quick to share cybersecurity ideals that didn't survive first contact with the real world. For example, complex password requirements with forced 90-day rotations that NIST abandoned years ago, or compliance programs that check boxes without improving security, or DLP implementations that crushed performance for minimal gain. One practitioner captured the real problem: most security teams would benefit from having an honest first-principles conversation about what tangible outcomes each of these programs has produced in the last 12 to 24 months. That's a good point. So how do you go about, Julie, vetting out and discontinuing security measures that simply aren't doing anything anymore except wasting money and cycles for your staff? And by the way, I should mention we've brought this up before: there are things that at one time did make sense and then stopped making sense, so they kind of creep up on you, and you don't really know, oh, this is the day we need to stop. So, like, how do you figure this out, Julie?
Julie Meyerholtz
Yeah, it's not an easy one. And I can give you a prime example. I've always said compliance is not security, right?
David Spark
That's a theme we've had many times too. Yes.
Julie Meyerholtz
Yeah, and when I had to be NIST compliant at a previous company, I actually had to take my security profile backwards, because I couldn't use things like CrowdStrike. The regulations prevented me from using things in the cloud that other people had access to. So you really need to look at what's best for the business and how to enable your business to do what it needs to do. Have you ever seen somebody with an RF gun trying to type in a 20 character password? It's just not practical, right? So look at the controls that you have in place and the risk associated with them. Let's reduce the length of the password for those individuals to make it easier for them. But for the domain admins, maybe we do keep it more complex or better protected. So it's really taking a step back and asking: what are we trying to do here? How do we enable the business to do what they need to do while reducing risk around the highest-risk items?
David Spark
And is there kind of an internal audit exercise you do? Or is it just, like, one of the things one of our other co-hosts, Andy Ellis, says, which is just asking the staff: what is this thing that you don't understand why we're still doing, because it doesn't make sense? Do you do this?
Julie Meyerholtz
I do that. And I do find myself fighting audit on a regular basis, just because they still look at, well, your passwords need to be complex because that's what NIST said, and those types of things. So it's convincing people to change the mindset and understand: why are we doing this? Is it because we've always done it this way? Is it working for the business? And how do we make things better and faster for the business?
David Spark
All philosophies we lean on strongly. All right, Mike, I throw this to you. How do you sort of catch the things that are creeping up on you as slowly becoming useless? I guess.
Mike Johnson
I mean, there's the story of slowly boiling the frog, right, where it's a little bit here, a little bit there, and the next thing you know, you're having to roll back a decent EDR because you have to be compliant. From my perspective, years ago I was introduced to a concept called the beginner's mind. It's the idea that if you don't have knowledge of something, and you come in and start asking questions, you're going to bring different perspectives. There was mention of first principles. What are even the first principles of security anymore? I don't think that's a useful approach. But periodically looking around and saying, hey, why do we keep doing this? Or go ask your newest team member: hey, you started yesterday. As you're onboarding over the next 30, 60, 90 days, what are some things where you wonder why we do them? And let's have a conversation about them.
David Spark
Have you said that to new hires? Because that is actually a good question.
Mike Johnson
Oh, absolutely. Yeah. That's something that I basically write into the onboarding document for anybody who's at least reporting directly to me.
David Spark
And hold on, I'm going to pause you here. Has one of those new hires said, why are we doing this? And you go, let me look at that. You're right. Have you had a moment like that?
Mike Johnson
Yes, absolutely. And the password one is really the biggest one that always comes up because we have password requirements that I'm not happy with, that we're actively working to fix. And I think that's one of the other things that people need to understand is there's the recognition and the acceptance of the need to make the change, and then there's the actually doing it. And these don't necessarily always align and someone will say, hey, why do we have these particular password requirements? Well, here's how we got here and here's what we're doing about it. And that is something that has come up frequently when we have somebody new joining the team. So, yes, we certainly have gotten feedback from that. Some of it there's a reason, and some of it it's not a good one.
David Spark
You brought up a thing about the beginner's mind, which I think is great, and I'll bring you back in, Julie. Here's a problem I think we all have: when you're mired in it, you don't have that outside view anymore. Is there a way to create that again, Julie?
Julie Meyerholtz
I've done it a couple different ways. One is looking at it from a design thinking perspective. I don't know if you've ever followed the design thinking concept, but bringing people in who don't normally live it every day and having them solve the problem, you get really interesting answers that you probably never would have thought of, or perspectives that you never would have seen had you not looked at it from that angle. But I always challenge my team, too: just because we did it that way yesterday doesn't mean tomorrow is the same day. So how do we look at each problem uniquely today, and solve it for what we know today versus what we did five days ago? That helps put a little bit different spin on it, to say, okay, well, yeah, we needed to do that then because of XYZ reasons, but today is different. So how do we look at it differently and enable ourselves for the future, versus just following what we've always done?
David Spark
Mike, let you close it up. How have you been able to create that beginner's mind?
Mike Johnson
Some of it is really just intention, right? Encouraging folks to ask questions that they might have asked before, gotten an answer, and walked away. Another concept is learned helplessness: well, I can't change it, so I'm not going to ask about it. Giving folks the clarity, or the clearance, to ask questions that they have previously asked, encouraging that, and then accepting it and doing something with it creates a feedback loop, so other people will then come forward and say, well, what about this? What about this? So really, reacting to feedback is one of the most important things to do with that loop.
David Spark
No, it is not your imagination. Risk and regulation are ramping up, and customers now expect proof of security just to do business. And that's why Vanta is a game changer. Vanta automates your compliance process and brings compliance, risk, and customer trust together on one AI-powered platform. So whether you're prepping for a SOC 2 or running an enterprise GRC program, Vanta keeps you secure and keeps your deals moving. Now, companies like Ramp and Ryder spend 82% less time on audits with Vanta. That's not just faster compliance, it's more time for your growth. You can get started on all of this if you go to their website, vanta.com/ciso, and do me a favor, add that "ciso" in there. It's an easy way to let Vanta know you heard about them from us. Remember, vanta.com/ciso.
Host/Announcer
It's time to play what's Worse.
David Spark
Julie, you know how this game is played. Two horrible scenarios. You have to decide which one is worse. All right, this comes from Oscar Morales of Calian IT and Cyber Solutions. Here are the two scenarios. One: your encrypted data is decrypted, exposed, and leaked because of the use of quantum technology to break your encryption protocols. So this is very advanced technology that has pulled this off. Or, with similarly advanced technology: you have to deal with advanced malware, created through the use of AI, that can adapt to security defenses, thus making traditional detection methods useless. Which one is worse?
Julie Meyerholtz
Both of those sound lovely.
Mike Johnson
Yeah, both of these sound like a whole lot of fun.
David Spark
Yes.
Mike Johnson
At the same time, I'm trying to decide which one is more science fiction. And so that's what I'm trying to wrap my mind around.
David Spark
Well, we've definitely seen variations of the second one showing up these days.
Mike Johnson
Exactly. And that's where I naturally go: the thing that is more likely is the thing that is more concerning to me. The quantum attack maybe happens one day, maybe it doesn't. But the other one, orchestrating AI agents for attack, we see that today. That is something that is either here or very close to being here, and therefore more likely. On the other hand, I think that is also something we can better defend against. Essentially, that is the endless army of attackers that theoretically we can defend against with automation ourselves. Like, have my bot defend against your
David Spark
bot kind of situation, rather than this encryption breaking technology you have no clue about.
Mike Johnson
You have no clue about, and technically can't do anything about. It's already out there. Your data has already been leaked. So I guess I'm talking myself into a loop. Technically, if I go into my suspension of disbelief, the first one, where your data has been leaked and there's nothing you can do about it, is the worse scenario, versus the one where you actually have a chance. It's just going to be hard.
David Spark
All right. You think you've got no chance in the first scenario.
Mike Johnson
I mean, the data is already out there. The premise was your data has been leaked. It's already out there. So it's an event that has already occurred.
David Spark
Yeah, well, the malware has already occurred too. So your defenses are not working on it either. So, I mean, it's kind of the same thing in both scenarios.
Mike Johnson
Well, the second one was you're attacked and it's an attack that can continue to evolve.
David Spark
Right. And also, we don't know specifically about the data going out.
Mike Johnson
Right.
David Spark
Right. All right. Throw this to you. Julie, do you agree or disagree with Mike here?
Julie Meyerholtz
I disagree.
David Spark
Ah, I love hearing this. Great.
Julie Meyerholtz
Let's see. I would lean more towards the malware that may be manipulating and moving through my environment, that can take my business down, versus the data leak. Because not all data is created equal. It depends on what data they have and what they're throwing out there. And most of what we deal with is not always intellectual property. You might have financial data that'll eventually be released to the public anyway. To me, you're entirely right that quantum computing will do that, and we have no defenses against it. But I would view keeping the business running, and making sure that we're not ransomed or attacked with the business down, as more important than leaked data.
David Spark
I love the way she thought this out. And Mike, I'm surprised you didn't think the way Julie did.
Mike Johnson
No, no, it's all good points and I think like most what's worse, the idea is that this is hard, that neither of these are great. And I really like Julie's points. She made some very good ones.
David Spark
Very good. Julie wins.
Host/Announcer
Is AI going to help us or hurt us?
David Spark
Third party risk management was already a Gordian knot for CISOs, and AI is about to make it harder. And actually, we made a reference to it in the What's Worse scenario. That's because AI tools don't add one vendor, they add a vendor stack, argued Chris Matthews of Prezi. That simple AI assistant often includes the app vendor, the underlying model provider, integration connectors for email and docs, vector databases, hosting, and logging. Your third party risk footprint explodes even when the product looks straightforward. The problem is that traditional TPRM, third party risk management, is optimized for static SaaS risks like SOC 2 compliance and point-in-time assessments. But AI demands different questions. What data flows through the model? What gets logged and retained? Can your data be used for training or shared with sub-processors? So, Julie, I'll start with you. What's the blast radius if a connector gets prompt injected? Before you automate your SOC with AI, what questions do you need answered up front? So really I'm asking about the questions you're asking. What evidence do you demand along the way as well, to know that it's still staying in tune?
Julie Meyerholtz
Yeah. So AI definitely makes our lives a lot harder, especially from the business perspective, because anybody can turn on AI within the business, and whether or not you know that's happening is a huge risk. 100%. Our third party risk management processes do not really account for AI: how AI is changing, all the different layers of AI, what data is going into AI, and how we're managing it. From a SOC perspective, leveraging AI, I want to know where that data is being stored. Is it in my environment? Is it processed within our closed doors, or is it being shared with third parties? Is it being leveraged to train models? It's hard to understand how they're leveraging our data without having deep dives with the companies, understanding where that data is stored and how it's being leveraged, to know whether or not we're safely using those AI tools.
David Spark
All right, Mike, your take on this. And essentially it's the questions you need to be asking to understand what digging this AI is doing. Like, how far down a rabbit hole is this thing going?
Mike Johnson
I'm curious now, several months into the future, when our listeners are hearing this versus when we're discussing it today. Things are going to change. That's how quickly things are moving here. And we have a lot of uncertainty that we have to think about with AI. At the same time, I literally today was listening to a fellow CISO talking about how they don't want to just wholesale block AI tools, because they might have business benefit. As much of a risk, or maybe even more than the security side of AI, is what business opportunities are being left on the table, because your AI security perspective might say, well, we just can't do this. That's what is sort of unique about AI versus some of the previous risks, if you want to call it that, that we've dealt with: you really have to look at it from multiple perspectives. As for the questions to ask, a lot of it is what Julie covered. What is the data at risk? Where is it going? Where does it live? Where is it processed? How is it ingested? And a lot of those are not unusual. We've asked those questions before. It's not that different from cloud security in that respect. But what happens when that data goes into the model? Is the model learning from it? That's something new. That's something we haven't dealt with before. We've talked about the shared responsibility model. You have to trust that your provider is going to do what they say they will with the data. So there's an amount of, is this a vendor I can trust? Can I trust what they're telling me? It's a very different world, and I wish I could give our audience the perfect answer right here. But the reality is we're all kind of figuring it out as we go.
David Spark
Yeah. And my feeling is, being that these LLMs are crawling more and more data, more and more is going to reveal itself and make us more and more confused, I think, is what's going to happen. It's not going to get better. It's like this Gordian knot concept we talked about. It's just like, wait, my brain just can't absorb this. I mean, Julie, do you feel this? This is going beyond what our mental
Julie Meyerholtz
capacity is 100%, and it changes every day. I mean, we're just talking about how AI, if you asked us to create a picture six months ago, it would create a human being with hands, with feet.
David Spark
Right.
Julie Meyerholtz
And now today, it can do it almost perfectly. So it's learning at speeds that we just can't keep up with. And how do we manage the risk associated with that when it's moving faster than we can really comprehend?
David Spark
And by the way, it's a good point you brought that up, because when AI came about and we would see those original images, they were so silly. The classic example is the Will Smith eating spaghetti video, which looked bizarre in the first version. But then they created another one that looks pretty real now. People would mock the first one, but my comment was, look at how fast it's getting better. That's the thing that's crazy and scary all at the same time.
Julie Meyerholtz
Yeah, I 100% agree. It's learning faster than we are.
David Spark
Yes, I have never learned this fast at all. Coming up next: once you can see trends in your exposures, you'll be able to predict tomorrow's risk.
Host/Announcer
Today's exposure management tip is sponsored by Qualys.
David Spark
In high-profile SaaS breaches, the affected organizations discovered the same root causes: OAuth app misuse, excessive permissions, and poor offboarding. Often these exposures were fixed each time, but the pattern wasn't addressed. This meant that attackers didn't need new techniques; they just waited for the same exposure to reappear, since the organizations focused on fixing a set of recurring symptoms rather than the underlying environmental cause. The failure wasn't in the response, it was in not learning from exposure trends. The real power of exposure management isn't in fixing today's issues faster, but in preventing tomorrow's issues from being created at all. Mature programs analyze exposure trends over time: recurring misconfigurations, repeated identity mistakes, controls that fail in the same way over and over. These patterns reveal where architecture, process, or tooling is broken. When organizations use exposure data to tune guardrails and defaults, they reduce risk before it appears. That's truly where exposure management shifts from reactive defense to predictive control.
Host/Announcer
Want to go beyond exposure visibility and actually reduce risk? Find out how by visiting qualys.com/roc.
Does shaming improve security?
David Spark
TryHackMe's Advent of Cyber 2025 initially featured 18 creators, zero women. And this is something Josh Mason of Snack rightly called out on LinkedIn, spending less than an hour finding accomplished women creators who could have been perfect for this list, from DEF CON Black Badge winners to educators with millions of followers. But this isn't just a name-and-shame story. TryHackMe responded. The current lineup now shows at least 20 of the 38 creators are women. Something got them to change. Possibly this call-out did it, maybe other things. But as security leaders, you're making similar choices about who presents at your events, who you partner with for training, which vendors get attention. So when someone points out a blind spot in your diversity approach, speaker lineups, vendor relationships, educational partnerships, how do you respond? And I will say, just speaking for the CISO Series, we've had people do this to us, and we actually thank them for it, or we tell them, this is something we're working on, and it's actually quite difficult sometimes. But I'll ask you, Julie: do you get defensive? Do you fix it? How do you build processes that prevent these gaps from happening? Because, look, we all come in with biases, whether we like it or not. It's just how we live, where we grew up. Tell me, what's been your experience?
Julie Meyerholtz
So a leader a long time ago shared with me: feedback's a gift. It's a gift that somebody shared something with you. What you choose to do with it, and what you choose to do about it, is up to you. You can choose to modify, or you can choose to take it, say thank you, and just keep moving forward. So I think any feedback is always a gift, because at least you know something that you may not have known before.
David Spark
Yes.
Julie Meyerholtz
The way I try to handle the feedback, and try to avoid the blind spots, is I try to build diverse leadership teams, or at least work with diverse groups of people, and collaborate together as we're building whatever we're trying to do, so we can get everybody's point of view. And, hey, did you think about this? Did you think about that? Are we representing ourselves in the way that we want to represent? Because whenever you're going out publicly, you're representing your brand. Is that the brand that you want people to see for you? Is that who you really, truly are, and who your company is, based on what you're showing? And when you bring a lineup of 18 men to the panel with zero women, it's showing: hey, we've got a lot of great people who are really smart, but we have no diversity.
David Spark
Well, and I will throw this out. I go to the Black Hat Conference, and I also hear companies tout how many women they've been hiring. If you go to the Black Hat Conference, you wouldn't know that, because truly 1 out of 100 attendees is female at that event. It is overwhelmingly male. Mike, I agree with that. That's a great line, that feedback is a gift. I totally agree, 100%. I'm assuming you've had blind spots pointed out. How do you react to it?
Mike Johnson
The first is, again, what Julie said. Feedback is a gift, and you have to recognize that. You have to then encourage like, okay, tell me more. How can I learn from this feedback, not just take the feedback and then drop it on the floor? Again, as Julie said, it's what you do with it that matters. But learning more, asking questions, being curious, can then lead you down a path of better understanding and really getting into the particular issue. It's not, hey, this one thing is a problem, but there might be a root of an issue, like, you really do have this blind spot. You really do need to understand your biases and actively work against them and actively recognize them so that you can change and do better. But a lot of it just comes down to when you get that feedback. Hey, you've got this blind spot. You're not representing the community appropriately. You're not representing your brand. You're not representing really what it is that you want to project. Take that feedback and acknowledge, learn. And then maybe there's some policies that you can update to the point of how do you prevent these gaps from happening? You can take that feedback and then update a policy that says, well, here's what we're going to do next time, and here's how we're going to handle the next panel or the next presentation, and then that gives you the opportunity to do better. Sometimes it's really just a matter of having the feedback reflecting and then changing course.
David Spark
And I will add to this. You were saying it was a gift, and what you said, and we have received feedback as well. I thank them for the feedback to let them know how valuable it is. Because if you tell them that you're pissed off or annoyed that they told it to you, well, do you think they're gonna bring it up again? No, they're gonna ignore you, they're gonna avoid you. And then you won't know what your blind spots are at all. It's very important. It sometimes hurts to hear it, I know that. But it's the way we get better. So I'm making the call out to the fans of the CISO Series: if you wanna tell us something that's ugly about us, we welcome criticism. We totally welcome it, and we would appreciate it. We would see it as a gift. Also, if you wanna compliment us, we'll take that too. Right, Mike?
Mike Johnson
Especially you, David.
David Spark
I know, as I've said many, many times on this show, I have a high tolerance for compliments. All right, that brings us to the very end of the show. Julie, you were fantastic. Thank you so, so much for coming on the show with us. I'm going to let you have the very last word here. I want to thank our sponsor, and that would be Vanta. Remember: automate compliance, manage risk, and accelerate trust with AI. Go to their website. By the way, when you do that, let them know that you heard about them through the CISO Series. Why not? I believe you can actually go to vanta.com/ciso. It's an easy way to just let them know that you heard about them through us. All right, Mike, any last words?
Mike Johnson
Just a thank you to you, Julie, for joining us. I really appreciate your perspective, again, your focus on empowering the business, but also giving really good examples of how to actually do that. I also really liked, I think you called it design thinking, which was a really interesting way of approaching a problem: bringing folks in who have no skill in the area, having them solve the problem, and learning from that. So that was a really good tip. Thank you so much for joining us. I enjoyed learning from you. I'm sure our audience did as well.
David Spark
By the way, Julie, will you take critical comments from our audience?
Julie Meyerholtz
I will take critical comments from your audience.
David Spark
Ok.
Julie Meyerholtz
Please bring them on.
David Spark
By the way, Julie, correct me if I'm wrong. You also take compliments?
Julie Meyerholtz
Yes, I also take compliments, yes. Thank you.
David Spark
Let me give the first one, which I did before. You were fantastic. Any last words?
Julie Meyerholtz
Thank you. No, I really appreciate you guys having me. I enjoyed the conversation. Thank you so much. It was a great time.
David Spark
Awesome. Well, thank you very much, Julie. And thank you to our audience. As we say all the time, and truly mean it, we greatly appreciate your contributions. Give us more "What's Worse?!" scenarios; we greatly appreciate them. And we appreciate you listening to the CISO Series Podcast.
Host/Announcer
That wraps up another episode. If you haven't subscribed to the podcast, please do. We have lots more shows on our website, cisoseries.com. Please join us on Fridays for our live shows: Super Cyber Friday, our virtual meetup, and Cyber Security Headlines: Week in Review. This show thrives on your input. Go to the Participate menu on our site for plenty of ways to get involved, including recording a question or a comment for the show. If you're interested in sponsoring the podcast, contact David Spark directly at david@cisoseries.com. Thank you for listening to the CISO Series Podcast.
Episode: Why Highlight Diversity When We Can Just Hope You Don't Notice?
Date: March 24, 2026
Hosts: David Spark, Mike Johnson
Guest: Julie Meyerholtz (CISO, Brunswick Corporation)
Theme: Discussions, tips, and debates from security practitioners and vendors on improving security collaboration, risk management, real-world security practices, AI impacts, and the value (and challenges) of diversity in the cyber community.
This episode explores the realities facing security leaders: the evolving risks and responsibilities in cloud and AI ecosystems, how to separate effective practices from outdated dogma, and why diversity and open feedback are critical for security maturity. The panel delivers candid commentary on vendor responsibilities vs. enterprise controls, the importance of challenging assumptions, the impact AI is having on risk, and how (and why) to proactively address diversity gaps within the cybersecurity world.
Segment timestamps:
(00:54–05:04)
(05:14–09:30)
(09:36–16:58)
(18:03–22:05) — "What's Worse?!" scenarios:
  Scenario 1: Data is leaked after quantum computers break your encryption.
  Scenario 2: You face adaptive, AI-powered malware immune to traditional detection.
(22:16–27:19)
(28:38–29:49)
(30:12–35:03)
The conversation is candid, practical, and occasionally playful, with an emphasis on problem-solving, humility, and collective learning. The hosts and guest maintain a conversational, welcoming tone, encouraging self-reflection, openness to criticism, and a pragmatic approach to real-world challenges.
Final thoughts:
Julie Meyerholtz sums up by welcoming feedback and emphasizing the value of collaboration, diversity, and adaptability in security leadership. The episode encourages CISOs and practitioners to stay humble, be vigilant in questioning their own processes, and to embrace learning and inclusion as pathways to more resilient, effective security programs.