A
You're listening to the Cyberwire Network, powered by N2K.
B
I'm David Moulton and this is Threat Vector.
A
I think of security as a trade-off, right? If you're trying to improve security, you have to give up on something. What you trade could be user experience, it could be resources, it could be velocity. So there's always a trade-off if you're trying to improve security in every aspect. And obviously, if you asked people 20 or 30 years back, would you want a password on your laptop? People may have said no, I probably wouldn't want a password, right? Let alone having complex passwords that you have to change constantly. And then on top of that, MFA, hardware, software, whatever, right? So if you ask the users, or any IT, they want to put in the minimum controls so that the user experience does not get impacted. Right. But you've got to be able to balance it.
B
Today I'm speaking with Birat Nirola, head of security for Google Enterprise Networks. Birat is a friend of the show and is sharing his own views today. At Google, he leads security across on-prem network infrastructure, enterprise, and cloud environments. But this conversation isn't about Google at scale for scale's sake. It's about something more fundamental: how do you build security that people actually want to use? How do you make security so good that it enables business velocity instead of blocking it? Today we're talking about UX as a security control, practical zero trust implementation, and what it takes to lead enterprise network security, because security that slows people down is security that gets bypassed. Birat, welcome to Threat Vector. Really great to have you here today.
A
Yep, thank you for the opportunity.
B
You've had a really interesting journey. When I went through your LinkedIn and we talked before the show, you've gone from building SOC operations to leading cloud infrastructure at financial services firms and now you're heading security at Google. Walk me through how you got into security and what drew you into this field.
A
Yeah, so when I was doing my undergrad in information technology, I had options to take security courses, and I took a few. It was quite fascinating to see that you need to build a lot of understanding of the technologies before you can secure them. That's when I started getting interested in security. Luckily, even before my first job, I got to work in a SOC as an intern, and then my first job was at a startup where you could pretty much do everything, since it was a small company. So I ended up building a lot of security practices from scratch, including building the SOC team. It was a very good opportunity to learn from all of that, set up enterprise security, data center security, and cloud security, and build the foundation as an engineer from scratch. That's helped me in my future roles as I grew and started leading other functions in security.
B
So let's go back in time to when you were at Alarm.com. When you were there, you established their perimeter, enterprise, infrastructure, and IoT security programs. I'm curious: building security from the ground up, early on, what did that teach you that still shapes your approach and your view of security today?
A
Yeah, so first of all, when I started, it was quite overwhelming, because the first project that I owned was building an IDS, an intrusion detection system, across multiple data centers, and there were no instructions. This was 2010 I'm talking about. I pretty much got a book and was told, hey, we need to implement IDS in our data centers, because we manage IoT devices and we get tons of signals from everywhere, and we can't tell what's an attack and what's not. Doing that probably took more than two months, and I started realizing there was a lot of research involved. You actually had to understand the protocols, how every application worked, what's an attack and what's not, then build the rules and tune the rules for the intrusion detection system. So that taught me a lot and also gave me the confidence that I could pretty much manage anything.
B
When you were working on some of these ground-up systems, or even today at Google, are there lessons learned, or lessons reinforced, where you realized security UX is not just a design issue but a risk issue?
A
Yes. I mean, you learn that over time, I believe, and it took me quite a few years to actually understand all of it. I think by the time I got to CenturyLink is when I really started understanding it more deeply, because I was building products for customers. That's when you start hearing a lot of what customers are saying and what they actually care about. They probably don't care about every single vulnerability; they probably don't know about every single risk the organization's facing. They just care about the important things, and there's a lot of bypassing that can happen if you make security too strict. But in my early career, I definitely feel I was a security engineer who wanted to do the right thing and wanted to secure the infrastructure no matter what, not putting a lot of thought into what the overall impact could be on the company or on the velocity of the products. I was strictly focused on doing the right thing and securing the organization. Over time, we obviously evolved, and now we put a lot of thought into the end-to-end impact before starting something new.
B
When we were talking before the show, you were explaining how a security design could add a few seconds, or half a minute, to a process. But at the scale that you're looking at today, that could be days and days' worth of productivity lost while people wait on an MFA code to come through, those sorts of examples. And I wonder, when we talk about security UX, what does it mean for infrastructure and cloud teams? How do you think about balancing risk mitigation and those control layers with the user experience you're rolling out to internal users, some with elevated privileges, and to guests, vendors, and partners, all of which you face today in your role at Google?
A
I'll provide a generic answer first, right. I think of security as a trade-off. If you're trying to improve security, you have to give up on something. What you trade could be user experience, it could be resources, it could be velocity. So there's always a trade-off if you're trying to improve security in every aspect. And obviously, if you asked people 20 or 30 years back, would you want a password on your laptop? People may have said no, I probably wouldn't want a password, right? Let alone having complex passwords that you have to change constantly. And then on top of that, MFA, hardware, software, whatever, right? So if you ask the users, or the admins, or any IT, they'd say they want to put in the minimum controls so that the user experience does not get impacted, right? But you've got to be able to balance it. You can't just not have a password on the laptop, and we've already moved past that point many years back. If you think about 10 or 15 years back, we didn't have passwords on our phones. Now we've got various types of passwords. So users have actually adopted a lot of this, and they understand why it's important, and they don't disable it. Everyone's using it now. So it is a journey for all of us. But as we look at how we scale security, how we make sure people actually understand the trade-off and why something is required, there's a lot of user education that has to go on behind the scenes, and also gathering early feedback from a lot of people on how a change is going to impact users. Because a lot of times security folks do not understand what the end users go through. You have a different perspective, right? You're like, yes, I understand this is a little bit of extra work, but as you pointed out earlier, it's extra work for a company of over 400,000 users, for example.
B
I want to switch gears for a second. Every time I talk to anyone who's working in DevOps environments in the cloud, the number one thing they're concerned with is speed. Anything that slows things down tends to be seen as a problem. And I'm wondering where your security teams most often have to slow down your DevOps or cloud teams, and where you think it's really unnecessary and you can just let those teams run unbridled.
A
So generally most companies, especially now, are looking at how to improve velocity across the board, how to use AI to scale, how to do better, using technology to solve the ongoing problems. And on the flip side, if I put on the hat of someone who leads security, then you see the risks that we are creating for the future.
B
Okay.
A
Because I've been through this journey in the past, right? Every company started adopting cloud without understanding how cloud works. Every company was on board with the DevOps model, instead of systems engineering with different teams doing different things. DevOps is the you-build-it-you-own-it model, and a bunch of companies adopted it. But the team that built it and owned it probably didn't have a good understanding of security or operational stability, and then they ended up owning it, right? So there was a lot of learning we went through as an industry in shifting from that older way of operating to the new one. And what you see over time is you need to really understand what the risk appetite for the organization is, what's getting exposed, and really understand what they care about. I'll give you an example. I cannot go to a brand-new startup and try applying all the security controls that we apply at Google and say everything is required. Google does it for a reason. It is necessary because of the scale Google operates at and the type of attacks Google gets across its product areas. It's enormous. And the amount of state-sponsored activity. Just think about it: any type of attacker across the globe, Google's a primary target. But I can't go to a startup that no one knows about, or very few people know about, and say you've got to apply all these controls because it's going to make your business secure. It just doesn't work.
B
The risk model is completely different. You know what Google's facing as a target and the impacts of those attacks are different. I'm curious, can you talk about the difference between friction that protects and friction that creates risk?
A
Sure, I think it's a very interesting question. Let's start with friction that creates risk. When you create additional protections and add hurdles for a system user or end user, they'll try bypassing them, because it impacts velocity. For example, product teams essentially just want to launch their product, then debug and fix their issues ASAP so they can launch fast. And if you have them go through multiple hops of pushing the code to dev, then to validation and test, and then to production, just to validate whether the same change they made earlier is going to cause a problem or not, that delays them. It takes time to do all of that. So we've seen examples of multiple teams trying to inject a backdoor so they can directly access the production servers or infrastructure and troubleshoot directly. You see that especially with ML workloads, where they want to tweak the model really fast in production. That friction actually creates the risk of them bypassing controls, and it also provides an avenue for attackers to use the same backdoors to attack the infrastructure. But on the flip side, the friction that protects against risk includes the examples we shared earlier. We have a password, we use MFA; it makes things harder for an attacker. Access tokens expire after a certain time, and you've got to refresh them, so that narrows the window for an attacker. The same way, going to a production server through a hop or a jump host is an extra step, but it makes it really difficult for someone to go to the production servers directly or to use credentials they've added. So that protects against a lot of risk, and it's necessary.
If you don't have that, you're just making it too easy for the attackers to bypass your controls.
B
So, as I'm listening with my designer hat on, some of the things you were talking about, where you've created friction that legitimate users try to get around, and therefore attackers can too, might be where you've not been thoughtful about the impact. And on the flip side, where the protections really do help is where there have been enough at-bats and enough study of how people are interacting with that control, or their exposure to it, so they're used to it. But it comes down to refinement and tuning, not just a blanket "here's a policy, here's a control, this is exactly how it's going to work." And I suspect, if you were to think about this or challenge me on this idea, that as you roll out security, it has to change over time, and so does the user experience of that security. It's not fire-and-forget where you never come back to it. It sounds to me like you're going to come back and look at it over time, and make sure it's the right amount of control and the right amount of friction to match the risks you're facing as things change.
A
Right, right. So, personally, I've done a few things to manage this. Sometimes, as you talked about earlier, it's about how you improve velocity, how you allow folks to move faster. And we've done that several times, but we go back and ensure that exceptions are being looked at on a regular basis. So when someone wants to move fast in a dev infrastructure, we'd probably give them full access in dev, because it doesn't expose any sensitive data or any production data, for three months. Let's say that access expires in three months. That exception has to be revisited again: is it still required? Do they need the same access in production? Have they sorted everything out? So that is still improving velocity, letting the teams build what they do best and not be slowed down by security. But when you revisit it, one of your responsibilities is also to work with the team to figure things out. You'd be surprised what type of creative solutions we come up with at times to help the end user, rather than just using a sledgehammer approach of "this is the only way we could secure it." We try being creative around applying compensating controls, so that the infrastructure still remains secure when it goes to production. So it is something we have to look at constantly. The other thing we look at, and my team spends some time on this as well, is, across the board, what's actively in use right now in terms of access, whether that's network access, user access, and so forth. What's required and what's not at this point? What's the usage like? And sometimes we just remove access that hasn't been used for a long time, for example.
So that's how we look at things at scale: figure out what's absolutely needed to ensure it's secure, and second, make sure we don't leave backdoors. And third, if the application has morphed or is doing something weird, we're also able to flag it: this is new, this never existed. Sometimes it's, you're requesting this, but we never saw it in the architecture or the design. What has changed? Why do you want to connect to an external vendor that's not even approved, for example? So we look at it from multiple lenses.
B
Birat, I want to go back to something you said a moment ago: we're building the future without understanding the risks. I'm curious, you were talking about going into a world where everyone moved to cloud. Are you seeing that happening today? And if so, where do you see that specific problem happening now?
A
So there are a few areas. Most companies are on a cloud journey now, so most are actually using cloud to a certain extent. I don't think most companies actually understand what type of use cases they have and how to balance that. And this is a classic problem you see: end users don't know what security requirements you have, and you don't know what the end users or the application teams are actually building. So there's a lack of understanding between what the end user is doing and what the policies are, and it goes both ways. The same trend I'm seeing now is with AI. You don't know what type of models are being brought in, what type of data is being used, what type of protections you have for that data, what type of infrastructure is being used to secure it. We are just building models, and we are vibe coding, for example, in some cases. And if the code works, why would most people actually go and look at whether every software library the code pulled in, or every part of the code, is important for you or not? Is the model sanitized, the model you're importing to do something like facial recognition, for example? What's the support like for that particular model? Who's supporting it? It's like the open source problems that we have; we are going to see more of this. And that's why I was saying earlier that we've been through this journey of companies starting to use cloud without understanding what they actually wanted from cloud, how to secure cloud, and third, what the end users are using within the cloud itself.
And the same problem is being replicated now with AI, and in some cases also with new products that the cloud services are providing.
B
Well, I felt like I was leading the witness a little bit, but I wanted to hear it from you, because we keep seeing folks rushing into AI, rushing into deploying or pulling in a model they don't fully understand, or exposing parts of their data to different interfaces without being entirely clear where that's going or what's being trained. And I felt like I'd seen the movie before, and it sounds like I have. It was just called the Cloud Transformation. Let's go back to something you were talking about just a moment ago: I'm building something new, I desperately just want to see if I can get it built, and I'm not thinking about security. How do security leaders partner with engineering and product teams to shift security left? How do you help those teams build secure experiences by default? What are some successful tactics or stories that you think are models for our listening audience to try out for themselves?
A
So I think, first of all, the best security is security that's seamless, that the user doesn't need to know about. For example, if you look at Google Chrome as a browser, most people don't think about updating it regularly, or patching it and rebooting their machines and so forth. It's done by default: you just press update and it happens, and everything works as is. Users don't have to think about the many types of attacks and everything going on behind the scenes. I think that's the best type of security: seamless, embedded, pretty much frictionless. But that's tough to achieve, and we see that all the time. I wouldn't say it's next to impossible, but it is quite difficult. What has worked for me across the different organizations I've worked for is embedding security folks, people who have worked on my teams in past companies or now, as part of the design process. But to do that you need regular touchpoints, so you understand what's coming ahead in six months, or in a few months in some cases, and what's urgent sometimes. You've got to embed your security teams within those organizations and product teams so that they start building secure-by-design solutions. If the teams actually understand how to implement something, then they can support it. On the flip side, and this is a classic that every company is probably dealing with, you can't have enough security engineers to cover everything, to scale and support every single thing that's being built out. That's just not going to be possible.
So the hybrid approach we're thinking about at this point is building some sort of AI agent that can help with a lot of questions. You build a design, the design gets auto-assessed, and it provides the guidance and requirements. If we become successful at this, I think it becomes an amazing model that will be able to scale and support very large use cases across the board, and you spend less of your engineers' time and less resourcing on your side to support that. And whatever you learn, you go back and update the model itself, rather than trying to tackle one use case at a time.
B
I've talked to a number of engineers about having an AI assistant look at your PRD, your product requirements doc, then look at your policy, and then look at your code and how you've built things, making sure those three things come together and flagging when they don't, or when there's a little extra work that needs to be done. And I look at that and think that would be amazing. You've then got to protect the policies and the PRD and make sure those aren't compromised; maybe that's a different problem for a future state. But it does seem appealing to have a security sidekick at the side of every engineer, running through and checking those things, flagging them early on, before you go into test or production, or before you're out there on the back side of a breach trying to figure out where the problem came from when you could have known about it, in some cases, months or maybe years before. Okay, when we're looking across hybrid and multi-cloud environments, which is an area where you spend a bit of your time, where do you see the biggest UX-driven security gaps today?
A
In a hybrid or multi-cloud infrastructure? I can specifically talk about the UX problems for the security engineers themselves. You have a control that you need to apply; think of compliance or security, anything. You know what you need to do, you know your company's policy, you probably know the compliance frameworks you need to adhere to. How do you apply that across multiple clouds? How do you apply that across on-prem versus cloud? I've run into this problem in the past, and you need to understand multiple clouds really well in order to implement those controls, because all clouds don't work exactly the same way. The way you implement certain controls in one cloud infrastructure is different from another cloud. The way you do it on-prem, where you have more control, is different from the cloud, where you give up some of that control. So it's a problem for security engineers. And at the same time, how do you make sure those controls are applied by other teams? In a hybrid setup, take the example of a Kubernetes cluster that runs on-prem and then scales to the cloud. How do you manage that? The networking stack is different in the cloud, so you've got to work with those teams to design it well, so they can build it to work across the board, with the same consistent practices on both sides. That's how you solve some of that. The other important thing is what I shared earlier: do you know the exact use case? Cloud makes it super easy for anyone to do anything. And unless you know exactly how to secure the cloud, and how your requirements actually map to a particular cloud provider and the 200-plus services they offer, you can't really help the other teams secure it.
Because all they care about, pretty much, is launching the product and moving on. So you've got to help them through that. For security engineers, it's a lot of learning and understanding. But again, the point I was making earlier: start by not thinking about how to solve the entire problem. Think about which problem is the most important to solve at this point and what the baseline requirement is, then build up on that baseline. Rather than trying to think of every way you can secure an infrastructure, think about the baseline, so you make it easier for the other teams to understand what the baseline is and maintain it. Then gradually, over months and quarters and years, improve security, rather than trying to do everything all at once and impacting the velocity and experience of other teams.
B
Thanks so much for an awesome conversation today. You've got this former designer fired up, talking about the intersection of security, UX, and risk. And you get to apply that in the real-world environments you've been in, at Google and some of the big financial firms you mentioned earlier, all the way back to the Alarm.com days. So I really appreciate what you're working on. I'm sure you've got hundreds of thousands of Googlers out there who appreciate what you're doing; they just don't know it, because you're running in the background, making an awesome user experience for all of those customers of your product, if you will.
A
Yeah, certainly appreciate the opportunity and thank you for your time.
B
If you like what you've heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Those reviews and your feedback really do help me understand what you want to hear about. And if you want to contact me directly about the show, just send me an email at threatvector@paloaltonetworks.com. I want to thank our executive producer, Michael Heller, and our content and production teams, which include Kenny Miller, Joe Bettencourt, and Virginia Tran. Original music and mix by Elliot Peltzman. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now.
Episode: When Security Friction Becomes the Backdoor
Date: February 12, 2026
Host: David Moulton
Guest: Birat Nirola (Head of Security, Google Enterprise Networks)
This episode explores the often-overlooked relationship between security user experience (UX) and organizational risk. David Moulton hosts Birat Nirola, who shares his journey from building SOCs at startups to leading security at Google. The discussion centers on how security measures, if misaligned with users’ needs and workflows, can unintentionally become backdoors—hurdles that users bypass, thereby introducing vulnerabilities. The conversation also dives into practical zero trust implementation, balancing velocity and protection, and the evolving demands of securing large hybrid cloud environments.
The Core Challenge:
“When you create additional protections and add hurdles for a system user or end user, they will try bypassing it because it impacts velocity... that friction actually creates the risk of them bypassing and also provides an avenue for attackers.”
— Birat Nirola [13:12]
On Security by Default:
“The best security is security that's seamless, that the user doesn’t need to know about.”
— Birat Nirola [22:28]
The Case for Embedded Teams:
“Embed your security teams within those organizations and the product teams so that they start building secure by design solutions.”
— Birat Nirola [23:10]
Lessons from the Past:
“We’re building the future without understanding the risks… I felt like I saw the movie before...[but] it was just called the Cloud Transformation.”
— David Moulton [21:18]
The conversation balances technical depth and practical insight, using real-world analogies and candid admissions of past mistakes. Both speakers are pragmatic—recognizing the constraints of large organizations, the inevitability of user workarounds, and the importance of empathy in creating secure but usable systems.
For those who haven’t listened:
This episode will give you a nuanced understanding of how user experience, risk, and security intersect at scale, the hidden dangers of "security friction," and practical strategies for making security an enabler—not a blocker—of business velocity.