A
You're listening to the RSA Conference podcast, where the world talks security.
B
Hello, listeners. Welcome to this edition of our RSAC podcast series. Thank you for tuning in. I'm Tatiana Sanchez.
C
And I'm Casey Serkis, and we are
B
your RSAC podcast hosts. Casey, what are we going to discuss today?
C
Well, Tatiana, we're jumping into the exciting topic of regulations. Keeping up with laws, regulatory guidance, and standards in all the different jurisdictions relevant for your business already feels overwhelming. So on today's RSAC podcast, we are excited to be joined by two members of the RSAC Program Committee, John Elliott and Laura Ketzel, who will simplify the tangle by highlighting the most important elements of the global landscape for the cybersecurity community. They will explore how cybersecurity professionals can use the good old CIA triad to think holistically about how regulations are evolving, and ways to stay ahead of and compliant with those regulatory changes. Are you ready to jump right in?
B
Yes. But before we get started, we do want to remind our listeners that here at RSAC we host podcasts twice a month, and we encourage you to subscribe, rate and review us on your preferred podcast app so that you can be notified when new tracks are posted. And now we would like to ask our guests to formally introduce themselves before we dive in. John, let's start with you.
D
Hey, that's great. Thanks, and thanks for inviting me to the podcast. I guess I'm a specialist in how one group of people asks another group of people to do cybersecurity things. So in today's context, that's a law or a regulation: how a law or a regulation aims to affect an organization's behavior. But it's also how organizations communicate what cybersecurity they want to do, how an organization translates those regulations into policies, and how it then communicates those policies and processes and procedures to the people who work there to affect their behavior. And I've been on the program committee for quite a few years now.
B
Thank you, John. And Laura?
A
Hi, thanks so much for having me. I'm Laura Ketzel, and I'm the head of community research at RSAC. I've also been on the program committee since, I think, 2013, but I'm very bad at remembering how long it's been. I've done loads of research in cybersecurity over the years; I used to be at Forrester Research before this. So my interests are in helping people put all of what we know, about cybersecurity regulation in this case, to work.
B
Great. Well, thank you both for being here today. Let's start out broad. John, in our prior discussions for this podcast, you had an interesting macro frame for the general direction that data protection and cybersecurity-related regulations have been moving. Can you lay that out for us?
D
Yeah, absolutely. Just before I start, I want to be really clear that I'm going to use the word regulation, but I could mean regulation, I could mean law, I could mean regulatory guidance. If you look at the history of regulation, it was initially focused on protecting the confidentiality of specific types of data: personal data and PII with GDPR and US state privacy laws, payment card data with the PCI security standards, healthcare-related data with HIPAA, and financial data in various regulations like GLBA in the States and the Payment Services Directives in Europe. If we look at it from a NIST perspective, from the Cybersecurity Framework, those regulations or laws talked a lot about the identify, protect and govern functions.

But what I've seen over the last few years is a shift by regulators and lawmakers from regulations that are focused on the confidentiality of a particular data type to a focus on availability. Regulators typically use the word resilience rather than availability, but in our traditional CIA triad, that's availability. And that includes all the NIST Cybersecurity Framework functions: detect, respond and, most crucially, recover. Good examples of this are the EU's Digital Operational Resilience Act, which is called DORA, and the Security of Network and Information Systems, or NIS, Directive and the NIS 2 Directive. And if you look at banking and financial services regulations, even in the UK there's been a big focus on resilience.

And now what I think we're seeing is a final change of regulatory focus. And I don't know if that's because they think, well, we fixed the confidentiality problem, we fixed the resilience problem, but that's a focus on integrity. And we can see that initially in the EU's AI Act. And I'm going to predict that other regulations will concern themselves with integrity. So there's a direction of travel where we've got this focus on integrity. And if you look at that in the context of, for instance, chained agentic AI, where we've got no humans in the loop, integrity becomes vitally important. Because if we have non-integrity of data, either in training data or in RAG data, or anything that can poison the context window that an agent's making use of, and that chains into another agent, to another agent, to another agent, we can see there'll be significant problems. This is something that I know Bruce Schneier is going to talk about in his keynote at the conference on the Wednesday.

And then the final thing is that, moving on from integrity, there's another trend I can see. A lot of regulations regulate organizations, what an organization does: does your organization do this? I think we're seeing a trend towards regulating products rather than organizations. So the EU AI Act and the EU Cyber Resilience Act. Certainly the Cyber Resilience Act takes a massive product focus, and the EU AI Act does say what organizations that are doing high-risk AI things should do, but they also talk about a product framework and a product certification. So those are the two trends: a move to integrity and a move to verified products, not verified organizations.
A
That's actually a really good, comprehensive frame for looking at things, John. And I think that'll help everybody sort through the giant piles of regulation from all over the world, not just from here in Europe, where both you and I live. The only thing I'd add is that even though it's got resiliency in the name, the Cyber Resilience Act is actually a piece of integrity regulation in a lot of ways, it seems to me, because what it's aiming to do is govern the security, availability and recoverability, and so on and so forth, of all kinds of digital products. And so it sort of shifts the balance of responsibility onto the manufacturer of those products, which is kind of a first, in the sense that a lot of the previous regulation has really been geared towards the operation of various kinds of systems. With the CRA, you see the EU promulgating a set of rules that say if you produce this type of product, these are the standards that you must implement. And if you don't implement them, then you will be assessed fines and ruled out of compliance. So it's a really interesting shift. Now, obviously, I always get this wrong, so John can correct me if I've got the dates and the applicability wrong, but basically the date you have to worry about for the CRA is in 2027. Even though it's come into force already, they start actually enforcing the rules in a way that's relevant to manufacturers in 2027. John, I got that right, I think, yeah?
D
Well, whether they actually enforce or not is a different question from whether they are allowed to enforce.
A
Right, but in principle they are allowed to enforce.
D
Yeah, yeah, absolutely. But as with any new regulation, I wouldn't expect enforcement action to take place straight away. And any enforcement action would typically be consultative enforcement action. In other words, we write you a letter saying, do you know, you're probably not doing this right yet? You should probably start doing this right. I would hate to take enforcement action against you.
C
You know, listening to your point about the umbrella of regulation and the term that you're using and what falls under that, and then there's this enforced, not enforced question. It just feels like there is this giant pile of EU legislation and global legislation that cybersecurity professionals have to pay attention to. And there are various aspects of these regulations, depending on where you are in the world and where you do business, that can get really complicated. So I'd love for each of you to help our listeners understand: what are your recommendations, and any general approaches, that will help practitioners and security professionals not drown amidst all of this regulation?
A
Sure. Well, I think the first thing to do is take a deep breath. Then, after that, your previous efforts to comply with regulations will help you here. Right? Because there is an absolutely giant pile of just EU regulation, never mind everywhere else, once you take into account all the places where you might operate. And like John, I'm using regulation here to encompass industry standards and laws and regulatory guidance from various agencies and so on, because you have to consider it as a whole.

So one of the things that can really help is that pretty much everybody who operates at all in Europe has had to do a whole bunch of things to comply with the GDPR. And if you did that right, you implemented a whole load of privacy-by-design principles and did a lot of thinking about that very first, confidentiality-oriented segment of the regulatory landscape that John was talking about at the beginning. And the good news is you can use all of that work that you have done previously in all of your future compliance endeavors. Because as you've implemented privacy by design, as you thought about designing things securely from the ground up, all of that work should help you comply with all of the rest of the regulations that you might be subject to in any other jurisdiction. The EU's regulations were really the first very large-scale ones that were implemented, and so a lot of the other regulation, whether it's guidance or a standard, in a lot of the rest of the world builds on that sort of framework. And so all of those things that you did should really help you comply with things in the future.

That's not to say that there isn't new work to do, and in sectors where you haven't perhaps previously done as much work, you might still have considerable amounts of work to do. But the NIS 2 Directive is the second directive of that type, and yes, it has more requirements, but you were probably subject to the previous Directive, or even if you weren't, you at least thought about how you would comply with the previous Directive. And so all of what you've done previously should help you comply with all of the future regulatory obligations.
D
That's a really good point, because, to be very clear, European law is very much focused on risk. It's not black and white. And so we do a data protection impact assessment or a privacy impact assessment to work out the degree to which processing somebody's personal data may affect their fundamental rights and freedoms. Right? And the same is very true of the AI Act. The AI Act is really based around how does any of this high-risk AI stuff potentially affect the fundamental rights and freedoms of a human? And so the processes you have... And I think, even, didn't we do a podcast on this a couple of weeks ago, or is that the one in planning with VAL Alliance?
A
We did a privacy-regulation-around-the-world kind of webcast a couple of months ago. That's actually a good complement to this one, because there we talk about a lot of the data protection regulation in detail.
C
John just does podcasts with us so often, he...
D
Yeah, but the whole thing of working out how what I am doing will affect people's fundamental rights and freedoms is a really important step. And if you're doing that for data protection, that's going to help your AI governance. The interesting thing from a pure cybersecurity perspective is to look at the cybersecurity bit of the AI Act. It's actually an incredibly small part of the AI Act, and it says that systems should be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities. So that's good. We can't fix system vulnerabilities, but now we think we can legislate them away, which I think is a really positive thing. And then it says you do appropriate technical solutions, which could include, and I love the way they write this, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate training data, pre-trained components used in training, inputs designed to cause the AI model to make a mistake, adversarial examples or model evasion, confidentiality attacks or model flaws. And if you think from a cybersecurity perspective, gosh, I'm not actually sure how we do that at the moment. Model poisoning and prompt injection is something that we, as an industry, I think we're probably 5% of the way into understanding the best ways of doing that at the moment.

So to go back to your question, Casey, what do we need to pay attention to? I'd pay attention to the fact that legislators think if they write something like this, it will magically happen. We know it won't magically happen. So the next question is, you turn it on its head: if a regulator comes to you and says, how are you doing this, you need to have a great story to tell about how you think you're doing this. Which for me means lots of logging. It means, when you're ingesting via a RAG system, or you're ingesting prompts, or whatever you're sticking into the context window, making sure that you log what you're putting in and then logging the output. And then when we get to agentic systems, it's being able to have that step by step by step, especially if we're chaining agentic systems or chaining actions that agentic systems take, making sure we're recording what output we've got at the time. Probably the tools are not there yet to do that, but then monitoring that, to look for things that look, well, different or weird or unexpected. But I think the important thing, Casey, is to think: if a regulator came, how would we evidence this?
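For illustration, here is a minimal Python sketch of the kind of step-by-step evidence trail John describes: hashing and recording what goes into a model's context window and what comes out of each agent step. It is a sketch under stated assumptions; the function and field names are illustrative, not any particular framework's API.

```python
# Minimal sketch of an agent audit trail: fingerprint the prompt, the RAG
# documents injected into the context window, and the model output at each
# step of a chain, and append them to a JSON-lines log for later evidence.
# All names here are illustrative assumptions, not a real library's API.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

def digest(text: str) -> str:
    """Stable fingerprint of a piece of context or output."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass
class ContextRecord:
    step: int                 # position in a chain of agent calls
    agent: str                # which agent/model produced this step
    prompt_hash: str          # fingerprint of the prompt sent
    retrieved_hashes: list    # fingerprints of RAG documents injected
    output_hash: str          # fingerprint of the model's response
    timestamp: float

def log_step(log_path: str, step: int, agent: str, prompt: str,
             retrieved_docs: list[str], output: str) -> ContextRecord:
    """Append one agent step to a JSON-lines audit log."""
    record = ContextRecord(
        step=step,
        agent=agent,
        prompt_hash=digest(prompt),
        retrieved_hashes=[digest(d) for d in retrieved_docs],
        output_hash=digest(output),
        timestamp=time.time(),
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: two chained agent calls, each logged before its output feeds the next.
first = log_step("agent_audit.jsonl", 1, "research-agent",
                 "Summarise supplier risk", ["doc-a text", "doc-b text"],
                 "Summary of supplier risk ...")
second = log_step("agent_audit.jsonl", 2, "drafting-agent",
                  "Draft a report from: Summary of supplier risk ...", [],
                  "Draft report ...")
```

In a real deployment you would likely also retain the raw inputs and outputs under access control and monitor the log for anomalies, as John suggests; the hashes simply give you a tamper-evident record to point a regulator at.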
B
And John, you know, you've mentioned the EU AI Act and DORA. What about other AI regulations beyond the EU? Are we seeing similar approaches in other countries, or different ones?
D
We are seeing similar approaches in other countries. One country notoriously doesn't have any AI regulation, but that's fine as long as you only ever stay in that country. Singapore, I think Laura's more familiar with this one. Singapore is doing some really good stuff, aren't they, Laura?
A
They are. Singapore has actually done a ton of AI guidance over the last couple of years. And what's interesting about the Singaporean approach is that it's all sectoral guidance rather than enforceable law. But that's been their approach to digital regulation and other kinds of regulation, broadly speaking, and it clearly seems to work for them. Interestingly, and I didn't know this, so I'm going to tell everybody something I just learned not long ago: the Singaporean regulators have actually participated in a bunch of the EU regulatory advisory committee work. So it's easy to think about regulation as this global patchwork that doesn't overlap or talk to each other, but in fact a lot of the more forward-thinking global regulators actually work together, which I think is really useful for all of us who have to figure out how to do this worldwide and globally.

So they've been working on AI regulation and governance of various kinds in Singapore since something like 2020, I think. And you can look into the specifics for the use of AI in risk management in the financial system from the Singaporean authorities, and there's a bunch of model governance frameworks that they've issued, which they've updated fairly recently as well. They've even got some guidance on agentic AI already, which is pretty fast given that that came out of nowhere over the course of the past 12 months, as we all know from reviewing submissions for RSAC Conference. Because of course there was virtually nothing on anything like agentic AI for the 2025 conference, and for the 2026 conference we had, understandably, an absolute giant bolus of agentic AI submissions, just reflecting that kind of fast evolution.
C
But the fact that they have some regulation already is incredible because usually, you know, it's a slow slog for legislation to catch up.
A
Yeah, the usual saw about this is that regulation and legislation run after technological development by several years. Right. And one of the few places in the world where that isn't true is Singapore. So this is not a surprise; they sort of have form in this area, because their regulators tend to stay very far ahead of things and be willing to issue guidance quite quickly. I mean, obviously it helps to be a small city-state that runs in a very efficient way. But nonetheless it's always a good place to look for guidance on leading-edge technology stuff, because they tend to have gotten it out.

The other jurisdiction that's worth paying attention to in the AI regulation space at the moment is Brazil, because they have a draft AI law that's been waiting to be approved, and it'll be really interesting to see if they actually approve it over the course of the first half of this year. For those who don't know, Brazil's got elections in October, and so conventional wisdom would be that if they don't approve the law before the end of June, it's not happening until 2027 at least, because, like in other countries, once the prime election season starts, not that much normal legislative business happens. So it'll be interesting to see if Brazil passes such a law, because they've been relatively fast followers, I guess we could say, on the regulation front. The General Data Protection Law in Brazil is quite similar to the EU GDPR, and that's deliberate; Brazil tries to follow the EU regulators in the way that they approach different kinds of regulation. And the AI law in Brazil should look similar in a lot of ways to the EU AI Act, I would think, when it finally passes.
D
Yeah. And Casey, I think one of the important points to make about regulation and laws is that they tend to be technology neutral, because regulators are very aware nowadays that writing things that are not technology neutral is a bad thing. So if you look at most European law, and in fact I quote again from the AI Act, it uses the word appropriate a lot. You know, technical solutions should be appropriate to the circumstances and risks; vulnerabilities that will be addressed shall include, where appropriate. Okay, so they use this word appropriate a lot. In America, you would tend to use the word reasonable rather than appropriate, from your case law background. And so appropriate means what's appropriate for the time you are doing something. So if I was thinking about the security of AI systems in my organization, not specifically governance, but thinking from a cybersecurity perspective, I would look at what's out there now that we would consider appropriate standards to secure AI systems. I would go and look at the OWASP GenAI security projects and look at their Top 10, and I would look at the guidance that's come from NIST. So although the legislation deliberately tries to be general and uses the words appropriate and reasonable, it's then up to you to say, okay, what is appropriate and reasonable in 2026 if I am doing something. And so for me, I would definitely look at the OWASP stuff.
D
And I believe there's a great OWASP seminar at conference this year.
C
Indeed there is.
A
Just to add one thing to make sure that's clear. The reason that you'll see reasonable and appropriate and so on in these standards and regulations and laws is precisely because the regulators and legislators are very well aware that they are unlikely to be able to predict what the best way of doing something is five years into the future. And so that's why they do it that way. And it's infinitely preferable to a bunch of requirements that say you must do this specific thing that then stay on the books for 10 years and you're stuck doing something sort of obsolete because the law requires you to.
C
Speaking of five years into the future, I know that many nations and regional authorities are requiring, perhaps some are just recommending, but some are requiring, a transition to post-quantum encryption by 2030 or 2035, with interim milestones for cryptographic inventory, planning, and starting the transitions in 2026 through 2028. So what should everyone be doing now?
D
That's a really good question. First of all, I'm actually going to say almost exactly what you said there. You need to have an inventory to know where you're using cryptography in your organization, and that can be both for data at rest and data in transit. What we're looking for is cryptography that will be significantly weakened once there are quantum computers that can weaken the algorithms or key sizes you're using. We have a great session actually at RSAC Conference this year on post-quantum cryptography; we had a lot of submissions and we've got a really good session, so I won't get into the maths details.

But the reason for doing an inventory is to say what needs to change. If it's symmetric cryptography, you need to move to bigger key lengths. And if any of you are using old algorithms like DES or triple DES, you need to move over to AES. That's a pretty easy technical thing to do. The problem comes where we're using asymmetric algorithms and techniques. We'll need to change those, and that's not a plug-and-play change in the way that changing a symmetric algorithm is a plug-and-play change, so it might take you a while to do it.

So number one is inventory. Number two is work out where your weakness is. Number three, and this is the important thing, Casey and Tatiana, is to work out how long it will take you to reprogram and redo something, which of course may also mean working with partners at the other end of that asymmetric cryptographic communication channel to work out that they need to change as well. And what you need to make sure is that you can accomplish the change before the quantum computer exists that somebody wants to use, one that will give a significant return on investment for breaking your cryptography. It's also not a blanket thing. You know, we're not all going to go to AWS and be able to cost-effectively rent time on a quantum computer to break the cryptography used by your organization. It's going to be expensive initially, and therefore you need to have something that's worth stealing.

The other thing to think about is not just the date by which the quantum computers are available; you also need to think about what's called a capture now, decrypt later attack. Is any of your data still going to be worthwhile to an attacker in, I don't know what the magic figure is, six years, seven years? In other words, is it worth them capturing your data now, so that in six or seven years, when the quantum ability comes to the organization that's captured your data, they can then break your cryptography and gain value, based on storing the data for all that time and breaking it? They can still get commercial or intelligence value from decrypting that data. And if that's the case, you probably need to be moving quicker than people where that doesn't apply. So the important thing is to understand the value of the data to somebody else and the effect it would have on you if it was decrypted in the future.

But the hardest thing that we found, and what I found working with organizations, is, you know, there's a matrix, isn't there, of things that are important and urgent, not important but urgent, not important and not urgent, and this, which is important but not urgent. And we tend to be really bad at important but not urgent things. And so my biggest tip is to focus on this, even though it is important but not urgent.
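As a rough illustration of the inventory step John describes, the following Python sketch records where cryptography is used and flags what a quantum computer would weaken. The categories, labels, and suggested actions are illustrative assumptions for the purposes of the sketch, not an official classification scheme.

```python
# Minimal sketch of a cryptographic inventory: list each place cryptography is
# used, then flag quantum-vulnerable entries and a rough remediation direction.
from dataclasses import dataclass

# Asymmetric schemes broken outright by a large quantum computer (Shor's algorithm).
QUANTUM_BROKEN = {"RSA", "DSA", "ECDSA", "ECDH", "DH"}
# Legacy symmetric/stream algorithms that are already weak regardless of quantum.
LEGACY_SYMMETRIC = {"DES", "3DES", "RC4"}

@dataclass
class CryptoUse:
    system: str         # where it is used (app, database, VPN, partner link)
    algorithm: str      # e.g. "RSA", "AES"
    key_bits: int
    data_at_rest: bool  # False means data in transit

def assess(use: CryptoUse) -> str:
    algo = use.algorithm.upper()
    if algo in QUANTUM_BROKEN:
        return "replace with a post-quantum scheme; likely needs partner coordination"
    if algo in LEGACY_SYMMETRIC:
        return "migrate to AES"
    if algo == "AES" and use.key_bits < 256:
        return "move to larger (e.g. 256-bit) key lengths"
    return "no immediate action identified; keep under review"

inventory = [
    CryptoUse("partner EDI link", "RSA", 2048, data_at_rest=False),
    CryptoUse("archive backups", "3DES", 168, data_at_rest=True),
    CryptoUse("internal database", "AES", 128, data_at_rest=True),
]
for use in inventory:
    print(f"{use.system}: {use.algorithm}-{use.key_bits} -> {assess(use)}")
```

The point of keeping the inventory as structured data is exactly the prioritisation John and Laura describe: you can sort it by how long the protected data stays valuable and how hard the change is, rather than treating the migration as one undifferentiated project.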
A
I think that's exactly right. And the thing to add here is that no one really knows exactly when the ability to decrypt the kinds of messages that John was talking about will actually be something like generally available. The estimates range really widely, from sometime in the next eight years to, I don't know, maybe 40 years from now, or maybe never. So that sort of gives you more, shall we say, fodder for the important but not urgent problem that John was talking about, which is: it's important, but who knows when it's going to happen? So it's easy to put off in favor of more concretely urgent things. I think we would all say don't do that, because the nearer-in estimates could be right, particularly if you've got data that would hold its value over time, so that somebody can capture it now and it's worth bothering to store so that they can decrypt it when they're ready later, whether that's in 7 years, 8 years, 12 years, 22 years, or whatever it is. You would want to be moving faster. And even if you don't have data of that type, you don't want to have to do this under the gun, as it were. So getting your inventory done, doing all your preparation, this goes under the heading of what's usually called cryptographic agility: being ready to move, so that you can accelerate your transition to post-quantum algorithms in all the places where you would need to, if all of a sudden it looks like, for whatever breakthrough reason we can't see now, quantum decryption of more stuff is going to be financially viable more quickly than some of the more pessimistic estimates say.
B
And I appreciate you both emphasizing that although it's not urgent, it's still important, even though we don't have an exact timeframe; it could be 2030 or 2035. I had an interview with a subject matter expert, and they were saying that the inventory itself can take three years, just to collect what you have, especially in healthcare, where data has to be stored for, I think, 10 years or more. So thank you both. And I want to bring it back to the beginning of our conversation. If we assume that John is right about future data protection and other cybersecurity-related regulation focusing more on the integrity element of CIA, how does each of you see that evolving over the next few years? John, do you want to start?
D
Yeah, sure. I'm going to come back to that word, appropriate or reasonable. Laws and regulations, as Laura says, don't like specifying things technically, because they know they become out of date. So it'll be, you know, think of the word: are we taking appropriate measures to ensure integrity? And if somebody came to us and said, how are you ensuring integrity, how would you prove or show or demonstrate that to someone?

So there's two things that I would look at. The first is to look at emerging standards. So OWASP; there's a MITRE framework for AI at the moment; look at things like that. Come to RSAC Conference and attend sessions to hear what people are saying about AI. And then I would say think really carefully about that verification. How do you verify that you took the appropriate measures? And I think that comes down to a lot of logging and data retention: what did you use to train? How did you make sure that the data going into a model was okay and not able to be tampered with? How did you verify that? So not only did you have protection in place to ensure the integrity, but then how did you assure that the output was what you expected it to be? How did you quantify the quality of the output, that it was integral?

There will be, I am sure, some magic cryptographic hierarchy that will help us assure the integrity of things. You know, at the moment we're all speaking to each other and we don't have our cameras on. I could be an AI saying these words, couldn't I? I could be a deepfake voice clone. So how do you know that it's me and not someone pretending to be me? At the moment we don't have a cryptographic way of doing that, but in the future we will have cryptographic ways of doing that, I hope, with some sort of magic cryptographic hierarchy. So we probably need to keep a focus on that as well.
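One concrete, if simplified, way to produce the kind of integrity evidence John describes is to keep a hash manifest of the data used to train or fine-tune a model and re-verify it at audit time. The sketch below assumes a local directory of training files; the file layout and function names are illustrative assumptions, not a specific tool.

```python
# Minimal sketch of training-data integrity evidence: snapshot a SHA-256
# fingerprint of every file in a training set, then re-check the manifest
# later to show the data was not tampered with between training and audit.
import hashlib
import json
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large files are handled."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: str, manifest_path: str) -> None:
    """Record a fingerprint for every training file under data_dir."""
    manifest = {str(p): file_digest(p)
                for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: str) -> list[str]:
    """Return the files whose contents no longer match the recorded hashes."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [name for name, expected in manifest.items()
            if not Path(name).is_file() or file_digest(Path(name)) != expected]

# Example usage: snapshot before a training run, check again at audit time.
# build_manifest("training_data/", "training_manifest.json")
# tampered = verify_manifest("training_manifest.json")
# print("modified or missing files:", tampered)
```

This does not solve output quality or deepfake attribution, which John notes still lack an agreed cryptographic mechanism, but it is the sort of logging-and-retention evidence you could put in front of a regulator today.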
A
So John has given me a very useful jumping-off place to be something of a tinfoil-hat security person and to say that integrity can sound kind of abstract when you're talking about it divorced from very concrete things, like the integrity of particular pieces of data. But one of the top-level reasons why it matters is because, like John said, right now we have our cameras off, so John could be a robot and we wouldn't know, in theory. The broader point there is that if regulation, and cryptographic implementation like John was talking about, of integrity isn't successful, then you'll continue to see the kind of institutional and counterparty trust collapse that everybody worries about. This is the concern with the flood of absolutely perfect deepfakes that no one can tell aren't real. It used to be that when you were doing the real-or-fake AI image quiz type stuff, you could relatively easily tell which things were fake, because the people had six fingers or the reflection in the mirror looked wrong or whatever. And already, with the best AI image generation, it is almost impossible to tell. Casey and Tatiana and I got a good test of this, because one of our coworkers developed yet another one of these things using frontier models, and we all, and even our children, scored super badly on trying to pick out the AI image.

So when you think about regulation, as much as it's not everybody's favourite topic, except arguably, I suppose, for John and me, the reason why it matters is because particularly successful integrity-focused regulation has the sort of potential to make those deficits of trust at least get less bad, less quickly, if not improve. And so more people will be able to be more confident that systems are not designed in ways that would cause them harm, and that the digital products that they buy actually live up to the promises that they make. Because one of the very large macro problems that we have as an industry, because we work in cybersecurity and data protection and so on, and that the world has, is that too much of this onus has shifted onto the individual person: that it's your own job to make sure that you are not scammed and that you don't fall for the fake. And at the end of the day, what a lot of the regulation in a lot of these jurisdictions is designed to do is to shift some of that onus back onto the people who produce the products and onto the systems themselves, rather than onto the individual humans, who somehow have to be their own cybersecurity people and their own data protection people and implement their own counterparty trust in financial transactions. The number of people who can actually do all those things successfully, even in our community, is very, very small. And the number of people who actually have time to do all of those things is infinitesimal. So if you want to be, shall we say, less unhappy about regulation, that's a better way to think about it: it's a way of shifting the onus back onto the entities that have more resources to actually mitigate some of these problems.
C
This has really been a very insightful conversation, and I know that you would love to continue talking for hours, but we do have to start wrapping up. I do want to let our listeners know that if they're interested in seeing Laura's and John's recommendations about which sessions at RSAC they should attend, they can visit the cybersecurity discussion in the RSAC membership. And do I understand correctly that you are potentially going to be diving deeper into topics that are of import in some of these discussions? Laura, is that true?
A
So John and I have made a list of some of the sessions that will be taking place at the conference this year, and John mentioned a few of them as we were going along here for extra credit, that address some of these topics. And because, as I said, okay, I might get the dates wrong, everyone gets the dates of enforcement of various things, with EU regulation in particular, wrong, I made a slide with all of the enforcement dates of the various acts and some of the largest potential fines and whatever. So I'll post that in the cybersecurity discussion community too. If I've got anything wrong on the slide, please tell me, but I think they should be right.
C
That's fantastic. I love it. John and Laura, thank you so much for joining us today. And listeners, thank you for tuning in. Please keep the conversation going in our RSAC membership platform by visiting onersac.com/membership, and be sure to check out onersac.com for new content posted year-round. Finally, don't forget to register for RSAC 2026 Conference by visiting RSAConference.com/USA. If you need to hit rewind, I'll remind you that John has dropped a few Easter eggs in this podcast, suggesting sessions worth viewing and attending. I mean, they're all worth it, right, John?
D
Absolutely. Every session has been carefully curated by the program committee. We got, I think, over 100 submissions for the track that Laura and I had, and you only allowed us to pick 10, which I thought was, you know, terrible, because we could have picked 20 or 25 really good sessions quite happily. So yeah, there are some great sessions at RSAC 2026 Conference. I'm massively looking forward to it. I'm massively looking forward to the OWASP seminar on Wednesday morning, and also Bruce Schneier's presentation on how he thinks the world needs to focus on integrity more.
C
The whole thing is going to be great, so you must be there. Thank you all so much. Until next time.
Episode: What Cybersecurity Professionals Need to Know about Legal and Regulatory Developments: A World Tour
Date: February 25, 2026
Hosts: Tatiana Sanchez (B), Casey Serkis (C)
Guests: John Elliott (D), Laura Ketzel (A)
Theme: Navigating the evolving landscape of cybersecurity law, regulation, and standards globally—with special emphasis on trends, preparedness, and the practical implications for professionals.
This episode offers a “world tour” of the current and emerging legal and regulatory environment for cybersecurity professionals. Hosts Tatiana and Casey are joined by regulatory experts John Elliott and Laura Ketzel, who break down key trends in global regulation, the evolving focus of laws (from confidentiality to integrity), and strategies for staying compliant and resilient in a rapidly changing landscape. The discussion includes actionable advice, global perspectives (EU, Singapore, Brazil), and special attention to the coming wave of integrity-focused regulation and post-quantum cryptography transitions.
John Elliott introduces the "macro frame" of regulatory focus:
Early regulation targeted confidentiality—personal data (GDPR, HIPAA, PCI), primarily via the classic CIA triad's “C.”
“Now what I think we're seeing is a final change of regulatory focus… on integrity. And we can see that initially in the EU's AI Act. And I'm going to predict that other regulations will concern themselves with integrity.”
—John Elliott, [05:19]
Key prediction: Movement from regulating organizations to focusing on certified, verified products (as embodied by EU Cyber Resilience Act and AI Act).
“With the CRA, you see the EU promulgating a set of rules that say if you produce this type of product, these are the standards that you must implement. And if you don't... you will be assessed fines and ruled out of compliance.”
—Laura Ketzel, [06:54]
Practical advice to avoid being overwhelmed:
“All of those things that you did should really help you comply with things in the future.”
—Laura Ketzel, [10:58]
John Elliott's emphasis on risk-based, process-oriented compliance:
“The whole thing of working out how will what I am doing affect people's fundamental rights and freedoms is a really important step.”
—John Elliott, [12:41]
“Model poisoning and prompt injection is something that we, as an industry… we're probably 5% of the way into understanding the best ways of doing that at the moment.”
—John Elliott, [14:30]
“Their kind of regulators tend to stay very far ahead of things and be willing to issue guidance quite quickly… a good place to look for guidance on leading edge technology stuff…”
—Laura Ketzel, [18:08]
“And so appropriate means what's appropriate for the time you are doing something… I would definitely look at the OWASP stuff.”
—John Elliott, [20:31]
“The hardest thing that we found… is… things that are important but not urgent. And we tend to be really bad at important but not urgent things.”
—John Elliott, [25:10]
“You would want to be moving faster. And even if you don’t have data of that type, you don’t want to have to do this under the gun, as it were.”
—Laura Ketzel, [27:05]
Evaluating integrity in systems and regulation:
“How do you verify that you took the appropriate measures? …comes down to a lot of logging and data retention…”
—John Elliott, [29:07]
Impact on trust and individual burden:
“The reason why it [integrity-focused regulation] matters is because… successful integrity-focused regulation has the sort of potential to make those deficits of trust at least get less bad, less quickly, if not improve.”
—Laura Ketzel, [32:14]
John Elliott:
“We can't fix system vulnerabilities, but now we think we can legislate them away, which I think is a really positive thing.” [14:00]
"Are we taking appropriate measures to ensure integrity? And if somebody came to us and said how are you ensuring integrity?” [28:16]
Laura Ketzel:
“All of those things that you did should really help you comply with things in the future.” [10:58]
“If regulation and cryptographic implementation… isn’t successful, then you’ll continue to see the kind of institutional and counterparty trust collapse that everybody worries about.” [32:10]
“The number of people who can actually do all those things successfully, even in our community, is very, very small.” [33:22]
On regulatory philosophy:
“It's infinitely preferable to a bunch of requirements that say you must do this specific thing that then stay on the books for 10 years and you're stuck doing something sort of obsolete because the law requires you to.”
—Laura Ketzel, [21:15]
| Time | Segment / Topic |
|-------|------------------------------------------------------|
| 03:04 | History and direction of cybersecurity regulation (Elliott) |
| 06:30 | EU Cyber Resilience Act and shift to product responsibility (Ketzel) |
| 09:31 | Strategies for handling regulatory complexity |
| 12:41 | Risk-based impact assessments and the AI Act |
| 14:00 | AI security requirements and practical gaps |
| 15:35 | Global regulation: Singapore, Brazil |
| 19:37 | "Appropriate"/"Reasonable" legal language |
| 21:48 | Preparing for post-quantum cryptography transitions |
| 25:10 | The "important but not urgent" challenge |
| 28:11 | Future of integrity-focused regulation |
| 32:10 | Integrity, trust, and shifting responsibility |
The episode underscores that while the regulatory landscape is evolving and complex—with focus shifting toward product integrity and resilience—the fundamental tools and approaches of good security and risk management remain powerful. The ability to pivot and adapt, leverage established frameworks, and anticipate future requirements will continue to be vital for cybersecurity professionals worldwide.