A
You're listening to the CyberWire network, powered by N2K. And now a word from our sponsor. The Johns Hopkins University Information Security Institute is seeking qualified applicants for its innovative Master of Science in Security Informatics degree program. Study alongside world-class interdisciplinary experts and gain unparalleled educational, research and professional experience in information security and assurance. Interested U.S. citizens should consider the Department of Defense's Cyber Service Academy program, which covers tuition, textbooks and a laptop, as well as providing a $34,000 additional annual stipend. Apply for the fall 2026 semester and for this scholarship by February 28th. Learn more at cs.jhu.edu/mssi. Microsoft tags a critical vulnerability in Fortra's GoAnywhere software. A critical Redis vulnerability could allow remote code execution. Researchers tie BIETA to China's MSS technology enablement. Competing narratives cloud the Oracle E-Business Suite breach. An Ohio-based vision care firm will pay $5 million to settle phishing-related data breach claims. Trinity of Chaos claims to be a new ransomware collective. LinkedIn files a lawsuit against an alleged data scraper. This year's Nobel Prize in Physics recognizes pioneering research into quantum mechanical tunneling. In today's Industry Voices segment, we're joined by Alastair Patterson from Harmonic Security discussing shadow AI and the new era of work. And Australia's AI-authored report gets a human rewrite. It's Tuesday, October 7th, 2025. I'm Dave Bittner, and this is your CyberWire Intel Brief. Thanks for joining us here today. It's great as always to have you with us. A critical vulnerability in Fortra's GoAnywhere managed file transfer software is being exploited in ransomware attacks, Microsoft has warned. The flaw, with a maximum CVSS score of 10, allows attackers to bypass license signature verification and achieve remote code execution on vulnerable systems.
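The class of bug involved here, verification logic that can be defeated by attacker-supplied license responses, can be illustrated abstractly. Below is a generic sketch of license-response checking using an HMAC over the response body; this is an illustration of the pattern, not Fortra's actual mechanism, and the key name and functions are invented for the example:

```python
import hmac
import hashlib

# Illustrative shared secret; real designs use an asymmetric signature
# so that clients holding the verification key cannot also forge responses.
SERVER_KEY = b"secret-key-held-only-by-the-license-server"

def sign_license(payload: bytes) -> bytes:
    """License server signs the response payload before returning it."""
    return hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()

def verify_license(payload: bytes, signature: bytes) -> bool:
    """Client verifies the signature BEFORE deserializing or acting on
    the payload. The GoAnywhere flaw is the inverse situation: a forged
    response can reach code paths that deserialize attacker-controlled
    data, yielding remote code execution."""
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, signature)
```

The point of the sketch is ordering: untrusted bytes must fail closed at verification, before any parsing or deserialization happens.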
Exploitation requires no authentication if attackers can forge or intercept valid license responses, posing significant risk to internet-facing instances. Microsoft linked the zero-day activity to threat group Storm-1175, which used legitimate remote monitoring tools, network scanners and Cloudflare tunnels for command and control before deploying Medusa ransomware. Though Fortra patched the flaw on September 18, hundreds of exposed GoAnywhere servers remain. Microsoft urged immediate patching, network perimeter reviews and running endpoint defenses in block mode. A critical vulnerability in Redis could allow attackers to gain remote code execution on affected systems. Redis, short for Remote Dictionary Server, is an open source in-memory data structure store that's widely used as a database, cache and message broker. The flaw, with a CVSS score of 10, stems from a 13-year-old use-after-free bug in Redis's Lua scripting feature, which is enabled by default. Authenticated attackers can exploit it to escape the Lua sandbox, trigger memory corruption and establish a reverse shell for persistent access. Researchers at Wiz, who discovered the issue and dubbed it RediShell, warn that over 330,000 Redis instances are exposed online, with at least 60,000 requiring no authentication. Exploited systems risk data theft, ransomware or cryptomining. Redis has issued patches for all supported versions and urges immediate updates, especially for internet-facing servers. A new report from Recorded Future's Insikt Group says the Beijing Institute of Electronics Technology and Application, or BIETA, is almost certainly affiliated with China's Ministry of State Security. Researchers assess BIETA is very likely MSS-led and likely a public front for the MSS First Research Institute. Public sources indicate BIETA researches steganography, communications and forensics, and collaborates with the MSS-run University of International Relations. Personnel histories, including links to CNITSEC, reinforce the assessment; its activities likely aid intelligence, counterintelligence and military missions. The research concludes BIETA almost certainly forms part of a broader MSS enablement network. Engagement with the institute risks technology transfer, covert communications support and strengthened cyber espionage tradecraft; export control authorities, academia and vendors should review ties and conduct strict due diligence. Reports of active exploitation targeting Oracle E-Business Suite have sparked widespread confusion and competing narratives across the cybersecurity community. Over the past week, vendors and researchers have offered conflicting explanations, ranging from password issues to credential reuse to an alleged zero-day, each claiming to have identified the true root cause. Analysis by watchTowr Labs claims that the attacks involve a remotely exploitable flaw that allows unauthenticated code execution across multiple Oracle EBS versions. The report calls for restraint, criticizing speculation that fueled panic and misinformation before Oracle's official advisory. The incident highlights how rumor and premature attribution can undermine coordinated response during active exploitation. Clear communication and evidence-based reporting remain vital as security teams assess exposure and await further clarification from Oracle and trusted researchers. Ohio-based EyeMed Vision Care will pay $5 million to settle a class action lawsuit over a 2020 phishing-related data breach affecting its email system. The settlement provides compensation for affected members, including up to $10,000 for documented losses and smaller payments for time and inconvenience. EyeMed will also implement new security controls, such as enhanced multifactor authentication, stricter password policies, employee training and third-party HIPAA risk assessments.
The company denies wrongdoing but agreed to improve its cybersecurity posture as part of the resolution. A new Tor-hosted leak site run by the Trinity of Chaos ransomware collective, allegedly tied to Lapsus$, Scattered Spider and ShinyHunters, lists 39 major companies and claims more than 1.5 billion records across 760 firms, Resecurity reports. Rather than announcing fresh intrusions, the group published previously undisclosed data from past breaches and has threatened Salesforce, alleging massive corporate data holdings. Salesforce denies new vulnerabilities. Sample data reportedly contains significant personally identifiable information but few passwords, suggesting access via stolen OAuth tokens and vishing tied to third-party integrations. The FBI issued an alert to help detect similar compromises. The leak site faces DDoS attacks and set an October 10 negotiation deadline. Experts warn further releases could spur phishing, identity theft and AI-driven abuse. LinkedIn has filed a lawsuit against Delaware-based ProAPIs Inc. and its founder, accusing them of creating over 1 million fake accounts to scrape user data and sell access via a tool called iScraper API. The company seeks a permanent injunction, data deletion and damages. LinkedIn alleges ProAPIs charged up to $15,000 per month for large-scale scraping, violating its terms of service. The suit also names a Pakistan-based partner, Netswift. LinkedIn says it will continue aggressive legal action to protect member data. John Clarke, Michel Devoret and John Martinis have been awarded the 2025 Nobel Prize in Physics for pioneering research into quantum mechanical tunneling, a phenomenon fundamental to quantum computing and modern electronics. Clarke, of UC Berkeley, said the award was the surprise of his life, adding that their collective work underpins technologies like smartphones.
The Nobel committee praised their discoveries for advancing quantum cryptography, computing and sensing, calling them vital to the next generation of digital innovation. This year's physics prize marks the 119th Nobel award, carrying a cash prize of about $1.2 million. Other Nobel announcements continue throughout the week, with the awards ceremony set for December 10th in Stockholm. Sadly, there's still no Nobel for podcasting. Coming up after the break, my conversation with Alastair Patterson from Harmonic Security. We're discussing shadow AI and the new era of work. And Australia's AI-authored report gets a human rewrite. Stay with us. At Thales, they know cybersecurity can be tough and you can't protect everything, but with Thales, you can secure what matters most. With Thales' industry-leading platforms, you can protect critical applications, data and identities, anywhere and at scale, with the highest ROI. That's why the most trusted brands and largest banks, retailers and healthcare companies in the world rely on Thales to protect what matters most: applications, data and identity. That's Thales: T-H-A-L-E-S. Learn more at thalesgroup.com/cyber. What's your 2 a.m. security worry? Is it, do I have the right controls in place? Maybe, are my vendors secure? Or the one that really keeps you up at night: how do I get out from under these old tools and manual processes? That's where Vanta comes in. Vanta automates the manual work so you can stop sweating over spreadsheets, chasing audit evidence and filling out endless questionnaires. Their trust management platform continuously monitors your systems, centralizes your data and simplifies your security at scale. And it fits right into your workflows, using AI to streamline evidence collection, flag risks, and keep your program audit-ready all the time. With Vanta, you get everything you need to move faster, scale confidently and finally get back to sleep.
Get started at vanta.com/cyber. That's V-A-N-T-A dot com slash cyber. Alastair Patterson is from Harmonic Security, and in today's sponsored Industry Voices segment we discuss shadow AI and the new era of work. So today we are tracking some of the changes that we're seeing in workplaces, particularly as a result of AI-driven tools. And I know you and your colleagues there make the point that there's kind of a new presence on people's desktops these days. It's not just Microsoft Word and a browser anymore.
B
Yeah, that's right. I mean, I grew up in that world where everyone got their work done essentially on the Office suite and email. And then SaaS came along, and then I think the biggest change that I've ever seen is occurring right now, which is that a lot of people start and finish a number of work activities in these AI chatbots and agents and other applications that are coming along very fast. I think this is a generational shift, of course, as many have said before me, with a lot of profound implications both for how we work, but also how we think about security.
A
Well, for the employees themselves, how does this shift show up in their day to day work?
B
Yeah, I mean, I think previously, you know, you'd have the Google search bar there, you'd be writing a document or an email, and you would use, you know, that standard set of tools that we've all got so used to. But I think now, first of all, the likes of ChatGPT, it's just been the most incredible growth that it has seen through the workplace. And whether employers are facilitating the adoption of AI or not, it is happening everywhere. And what that means, typically, in most people's day, as I'm sure it is in yours and mine, one of the things we think about first when we've got a new problem to solve is, hey, can I use an AI for this? That might save me a whole bunch of time in whatever it is that I'm doing, whether it's researching something or summarizing something, or even being a sparring partner in learning something. I find it can be very, very effective. We see this in the activity that we witness: there's just a great shift underway where very much more of our job is interacting with these AI agents and chatbots and other applications that are being built for the enterprise.
A
Well, let's dig into some of the security implications here. I know you've said that there's no true control plane for AI usage today. What does that mean in practice for organizations?
B
Organizations are in a tough spot because they clearly are under a lot of pressure to adopt AI as fast as possible and not be left behind competitively. And so every board, every CEO is sort of carrying that same message of being an AI leader and pushing into AI. But then at some point the security team and trust and compliance get involved, and they start to think more about, well, hey, where does our sensitive data go in this scenario? What is being adopted, and where are our employees putting our data? That tension exists everywhere right now. And the problem is that traditional controls are just not set up for this era. I mean, we went through obviously the SaaS era. Most recently we have, you know, web gateways and SASE and CASB capabilities, but they were designed for a different era. And the problem now is that they typically don't see the prompt-level data, the use cases around that, and how AI is being used by employees and where the data is going beyond a list of URLs. And so really trying to understand contextually what the employees are doing, and is that something that's high risk or not, and where's the ROI on my tools even, is something that most companies struggle with. They're sort of rushing into deployment, or, I think worse, is when they actually just try to block access and then employees find ways around anyway.
A
Well, are you finding that companies are trying to retrofit old security technologies for AI?
B
Yeah, I think it's the natural first place to look, right? Because nobody wants yet another tool if they can avoid it. And so they'll look at the, you know, SASE/CASB world first of all. And maybe they want to revisit DLP as a control plane, which strikes, you know, fear into the heart of most security professionals for many reasons, as you know. And then they figure out that trying to get visibility there is challenging as well, because you get a kind of URL list and not much more. And then the other area is, you know, Microsoft will say, hey, go and label everything with Purview. And that's a pretty big challenge for most security teams, to find all the data and label it. And even if you do, the challenge is, well, what's going into the prompt data, which is not necessarily files that can be easily labeled. And when we try and apply the last era's DLP-style PII detection, credit cards and Social Security numbers and things like that that are easily matchable, that only tells a small part of the story here. There's lots of other very sensitive business information that's getting put into these chat applications that, in aggregate in particular, could be very damaging. It could be outlining M&A events, or it could be legal action, or it could be to do with layoffs and personnel changes and HR issues. And every industry has some slightly different nuances to it, but essentially, there's a ton of sensitive corporate data that's getting put into these engines. And we shouldn't be trying to stop it here, because there's huge benefits being gained in letting your employees use these tools, too. So it's finding that balance. But I think for sure, the tools that were designed for the last era are not fit for this era, as we've seen time and time again in security.
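The pattern-matching style of DLP Patterson describes is easy to sketch, and the sketch itself shows the gap he's pointing at: regexes catch well-formed identifiers, but a sentence describing an unannounced acquisition matches nothing. A minimal illustration (the patterns are deliberately simplified, not production-grade detectors):

```python
import re

# Classic DLP-era detectors: they match well-structured identifiers only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def detect_pii(text: str) -> list[str]:
    """Return the names of any classic PII patterns found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

# A card number pasted into a prompt is flagged...
detect_pii("Card on file: 4111 1111 1111 1111")
# ...but a highly sensitive M&A prompt matches no pattern at all.
detect_pii("Draft term sheet for the Project Falcon acquisition, board vote Friday")
```

The second prompt is exactly the "aggregate business context" category that pattern matching misses, which is the argument for prompt-level, contextual visibility rather than identifier regexes alone.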
A
Well, some organizations are trying to block AI tools altogether. And you make the case that that's unsustainable.
B
Absolutely. I mean, I think it's just so apparent that if you try to stop employees using these tools, I mean, they're all using them in their personal lives now. And so when they come into the workplace, they expect to be able to get access to this. And the strategy that I see from a lot of companies is to say, well, okay, here's our AI policy, part one. Part two is, we've put in place our AI steering committee. Okay. But it hasn't typically got very good visibility into how AI is being used and adopted, because it's usually just looking at the SASE tool. And then part three, when it comes to control, well, no one really wants to put DLP in place or deal with labeling if they can avoid it. So they do often go with that blocking approach. And the problem is the employees tend to find ways around those controls. They get frustrated. The security team ends up in exception hell, having to approve lots of apps for different teams in different ways. And we're back to security being the department of no again. And to give you just one anecdote, I was talking recently to the head of AI at a pretty major insurance company in the US, and he said to me, hey Al, I don't have access to ChatGPT, and I'm the head of AI. And I said, well, what do you do? And he held up a laptop to the camera and said, well, I use this laptop instead, which is his personal laptop. And then he said, and so does my team. And that was his way of dealing with that corporate block. But we see it everywhere, and no one really wants to use the corporate-mandated versions necessarily. I mean, there's one other customer we're working with, this time in Europe, where they deployed Harmonic and discovered that they'd mandated Microsoft Copilot as the AI tool of choice and tried to point all the employees at that. And they bought a lot of licenses as well, so they were spending a lot of money.
They had four times as many users of the free ChatGPT edition as they did of the corporate Copilot. It's just staggering, right? And we also see, interestingly, even where you've got paid ChatGPT, about 40 to 50% of the data loss we see is going through personal accounts into ChatGPT. So even when the corporate one's available, often employees are opting to use their personal Gmail logins for their own accounts, maybe because they have other information in there already and they're used to it, and that sort of thing. But yeah, it's very interesting how this adoption journey is going, and I think just blocking things is never going to be the right answer.
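The personal-versus-corporate split Patterson cites is typically measured from the login identity observed at the browser or gateway. A toy sketch of that bucketing; the domain list and function are invented for illustration, not Harmonic's actual logic:

```python
# Hypothetical consumer-mail domains; a real deployment would anchor on the
# organization's own verified domains rather than a consumer blocklist.
CONSUMER_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def classify_login(email: str, corp_domain: str = "example.com") -> str:
    """Bucket an observed AI-app login as corporate, personal, or unknown."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain == corp_domain:
        return "corporate"
    if domain in CONSUMER_DOMAINS:
        return "personal"
    return "unknown"
```

Aggregating these buckets per application is roughly how a "40 to 50% of data loss goes through personal accounts" figure would be produced from telemetry.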
A
Yeah, it's interesting that, you know, we've always talked about shadow IT, but I guess shadow AI is kind of a subset of that now.
B
That's right. I mean, I think AI is everywhere. I sort of reject the notion that, you know, as you have in the SASE world, there's this sort of AI category of 300 apps that's supposedly all things AI, because I think essentially every enterprise app is building LLMs into the back end at this point. So I think about it being more that we were in the pre-GenAI era and now we're kind of in the post-GenAI era, and this is just the new reality, and how we handle it is the next question.
A
Well, looking at the companies who are having success here, can you describe what sort of things they're doing? What does it look like?
B
Yeah, I think the winning approach here is to try to work with the employees and meet them where they are: understanding the use cases, understanding why they're using certain tools, and getting that visibility, that picture of what's going on, so that you can put the appropriate controls in place. I think the blanket block is not good, because it just pushes people outside of your monitoring, and they inevitably adopt these tools anyway. And it causes all the frustration that I talked about. Equally, having a completely permissive policy isn't great either, because you're accepting a pretty major risk. Whether it's customer data going to China-hosted apps or your IP becoming part of someone else's training data set, those are often risks that companies are pretty concerned about. So I think the best thing to do here is to try to get the visibility into how AI is being adopted and used today. You may find, for example, that instead of using something like Copilot, people are using, let's say, Gamma AI for presentation generation, or Beautiful AI, or Napkin AI; in every category there's a lot of these new apps that are popping up. And then go and find out why. Right? What's the requirements gap? Do we need a dedicated control in that space? Can we standardize on one and put an enterprise agreement around it? Or do we want to block and redirect them in this case, because we think really they should be using Copilot, or whatever it might be? But at least then you're having the conversation with them, meeting them where they are. We're not the department of no anymore. We're facilitating and hopefully accelerating AI adoption. I think one area that is interesting to me psychologically is that some employees self-censor. So instead of leaning into these tools, they're worried about using them at work, and they're sort of holding back a little bit. And so I think you've kind of got to give them permission, encourage them to lean in, and meet them where they are.
And that way I think security can become an enabler again and the business is going to benefit overall.
A
What about the IT teams and the security folks at the organization? How do we encourage them to have a helpful approach here, as you say, to not be the department of no?
B
That's right. I mean, I think there's a bit of education here. It's pretty tough, because it's not as though security teams weren't overstretched already, right? There was enough going on, and now they have to become AI experts on the side as well, which is not great. There's just so much buzz around this space, too. There's the threats from AI, which is one area. There's using AI for security in the SOC, which is another area. Then there's the area around building your own AI and trying to protect that and your own apps, and so on. But I think the fourth area is where I think it's the most immediately interesting and applicable, which is, you know, how do we enable the employees to use AI safely and securely and put those appropriate guardrails around it? And I think that, again, starts with visibility, and then understanding the needs of the business and meeting the business at that point, and making sure that instead of the department of no, we're saying, yes, use AI, but do it with appropriate guardrails. And that starts with a policy. I think everyone's got a policy now, pretty much. And then, you know, you get your steering committee together. But I think you then need to feed the steering committee some proper data and visibility into what's going on, so that you can then make the appropriate controls and guardrails and have those in place around the business.
A
Are there any common mistakes that you see organizations making here?
B
There's probably four buckets that I see. I think there's a set of companies that are just very permissive, and they don't particularly care about their data. There's not too many of those, but they are out there, and I think they're pretty wide open, and that is what it is. Then there's the opposite extreme. You've got the ones that are just in heavy block mode, saying no to all things AI and trying to block everything. And I think outside of national security areas, that's probably overkill in most cases and ends up being counterproductive, because, as I said, you're going to drive the behavior just outside your monitoring, which is not helpful. People use their own devices, they disable or go around the controls, and that's not a good place to be either. And then I think in the middle, there are the ones that right now are very permissive, but they're worried about the risk and they want to put some controls around it. I think that makes some sense: you've got to lean in, you're trying to enable your employees, you've always had that attitude, but you are worried about the risk. That's bucket three. And then bucket four, which I see probably the most, is companies that are currently in block mode but are desperately trying to become more progressive while managing the risk. And I think that's probably the key category: companies that care about their data deeply, but they don't want to sit out this whole AI transformation. They've got to get in the middle of that. And again, that comes back to, I think, having the right guardrails, putting the right controls in place, but ultimately leaning in and enabling the employees.
A
As we look towards the future here, where do you suppose this is going to take us? What do you suppose people's working relationship with AI is going to look like a few years down the line?
B
You know, of course there's a lot of hype around agents right now. I think the reality for me, of both agents and AI more generally, is that rather than companies themselves building tons of this stuff in-house, this is mostly going to be a use-of-third-parties challenge, because you've got so many well-funded, dedicated teams in Silicon Valley and elsewhere that are building for every conceivable kind of vertical and use case at the moment, using AI and agents more generally. And so I think probably what we're going to see is, yeah, employees are going to be making use of agents, but it's going to be mostly third-party stuff. I think the enterprise thinks it can dictate how AI is getting deployed, but the reality is that the employees are going to be mostly dictating that by what they use, what applications and services they use externally. I think the majority of that is going to go through the browser, as it has done so far. If you look at the use of AI and agents today, it's almost all browser-based, and we even have the agentic browsers now, like Comet and Dia and others, which are a good first step in this direction. So yeah, more browser-based usage by employees of third-party AI agents and apps would be my quick summary of that. And then I think there's a whole other debate around where engineering is going from here. And I think for sure there's a place for the AI engineering environments, with Cursor in the lead, but Windsurf and many others in the mix.
A
That's Alastair Patterson from Harmonic Security.
B
When did making plans get this complicated? It's time to streamline with WhatsApp, the secure messaging app that brings the whole group together. Use polls to settle dinner plans, send event invites and pin messages so no one forgets Mom's 60th, and never miss a meme or milestone, all protected with end-to-end encryption. It's time for WhatsApp. Message privately with everyone. Learn more at WhatsApp.com. This episode is brought to you by Indeed. When your computer breaks, you don't wait for it to magically start working again. You fix the problem. So why wait to hire the people your company desperately needs? Use Indeed Sponsored Jobs to hire top talent fast. And even better, you only pay for results. There's no need to wait. Speed up your hiring with a $75 sponsored job credit at indeed.com/podcast. Terms and conditions apply.
A
And finally, Deloitte has agreed to refund part of a $440,000 Australian government contract after admitting that a report it produced was, shall we say, a little too imaginative. The Department of Employment and Workplace Relations discovered that its commissioned analysis contained fake citations, phantom footnotes, and even a fabricated court judgment, courtesy of a large language model enlisted to tidy up the paperwork. Officials insist the substance remains intact, though the confession reads like a case study in modern due diligence gone missing. Increasingly, AI is slipping into serious policy work, performing assistive tasks that somehow leave fingerprints of fiction. The irony, of course, is that this technology is being sold as a tool for efficiency and truth, yet keeps demonstrating a flair for creative writing. The quiet weekend upload of the corrected version suggests that the machines aren't the only ones generating artful evasions these days. And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com. N2K's senior producer is Alice Carruth. Our CyberWire producer is Liz Stokes. We're mixed by Trey Hester, with original music by Elliot Peltzman. Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow. Cyber Innovation Day is the premier event for cyber startups, researchers and top VC firms building trust into tomorrow's digital world. Kick off the day with unfiltered insights and panels on securing tomorrow's technology.
In the afternoon, the 8th annual DataTribe Challenge takes center stage as elite startups pitch for exposure, acceleration and funding. The Innovation Expo runs all day, connecting founders, investors and researchers around breakthroughs in cybersecurity. It all happens November 4th in Washington, D.C. Discover the startups building the future of cyber. Learn more at cid.datatribe.com.
This episode of CyberWire Daily covers the latest urgent cybersecurity threats, including a critical GoAnywhere MFT flaw being actively exploited for ransomware, a newly discovered Redis vulnerability, evolving state-sponsored cyber operations, and ongoing challenges around AI in the workplace. The episode also features an in-depth "Industry Voices" interview with Alastair Patterson from Harmonic Security, focusing on the rise of shadow AI—a phenomenon where AI-driven tools proliferate in organizations outside traditional IT controls, creating both opportunities and risks. The episode maintains a brisk, informative tone while diving into both high-level trends and hands-on advice for security professionals.
Guest: Alastair Patterson, Harmonic Security ([13:31]–[28:47])