
Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com/cst.
Apple issues security updates after two zero-days exploited in the wild. Scammers trick popular AI search engines into recommending fake support numbers. A torrent hides malware in subtitles. And AI outperforms 9 out of 10 pen testers in a Stanford hacking experiment. This is Cybersecurity Today, and I'm your host, David Shipley. Let's get started.

We start today with a broad security update from Apple after the company confirmed that two WebKit vulnerabilities were actively exploited in the wild. On Friday, Apple released patches for iOS, iPadOS, macOS, tvOS, watchOS, visionOS and the Safari web browser. The updates address two flaws in WebKit, Apple's browser engine, both of which can be triggered when a device processes maliciously crafted web content. The first vulnerability, CVE-2025-43529, is a use-after-free flaw that could allow arbitrary code execution. The second, CVE-2025-14174, is a memory corruption issue with a CVSS score of 8.8, indicating a high severity risk. Apple says it is aware that the vulnerabilities may have been exploited in an extremely sophisticated attack against specific targeted individuals running versions of iOS prior to iOS 26. One of these flaws, CVE-2025-14174, is the same vulnerability Google patched last week in its Chrome browser. Google described the issue as an out-of-bounds memory access issue in the ANGLE graphics library, specifically within the Metal renderer. Apple's Security Engineering and Architecture team and Google's Threat Analysis Group were credited with discovering and reporting the vulnerabilities. Apple also credited Google's Threat Analysis Group with identifying CVE-2025-43529. Because WebKit is used not only by Safari but by all third-party browsers on iOS and iPadOS, including Chrome, Edge and Firefox, the impact of these vulnerabilities extends across the mobile browsing ecosystem.
Apple has released fixes across multiple platforms and devices, including iPhones, iPads, Macs, Apple Watch, Apple TV, Vision Pro, and Safari on macOS. The company says users should update to the latest available versions as soon as possible. With these releases, Apple has now patched nine zero-day vulnerabilities exploited in the wild so far in 2025.

Our next story comes from Wired, reporting on a new technique scammers are using to manipulate AI-powered search tools and, in some cases, steer users directly to fraudulent call centers. Scammers are poisoning the public web sources that large language models rely on, causing AI tools to surface fake customer support numbers as if they were legitimate. Researchers say the activity represents a growing security risk tied to how AI search and summarization systems gather information. The research was published on December 8 by Aura Labs, part of cybersecurity firm Aurascape. The team refers to the technique as large language model phone number poisoning. Rather than attacking AI systems directly, threat actors are manipulating the public content those systems scrape, including websites, reviews and comments, so that fraudulent information becomes part of the data AI tools treat as trustworthy. In campaigns tracked by Aurascape, poisoned content was found influencing answers from tools including Google's AI Overview and Perplexity's Comet browser. In those cases, the systems returned scam airline customer support phone numbers, presenting them as official contacts. The researchers say attackers are abusing both compromised high-authority websites, including government and university domains, and public platforms that allow user-generated content. That includes sites like YouTube and Yelp, where scammers can post optimized text, fake reviews or bot-generated comments. Their goal is to ensure this content is structured in a way that makes it easy for AI systems to scrape, index and reuse.
The approach builds on what the industry traditionally calls search engine optimization but applies it to generative and answer-based AI systems. Researchers describe this as generative engine optimization or answer engine optimization, techniques now being repurposed to promote phishing and fraud. Once the poisoned content is in place, AI assistants merge information from multiple sources and present it as a single authoritative answer, even when the underlying data includes fraudulent phone numbers. Aurascape documented multiple real-world examples. In one case, when Perplexity was asked for the official Emirates Airlines reservations number, it returned a fully fabricated scam call center number. Similar results were observed when querying for British Airways support. Google's AI Overview was also found returning multiple fraudulent phone numbers when asked for airline contact information, presenting them as legitimate customer service lines. Researchers warn this is not limited to a single AI model or vendor. They describe what they call a cross-platform contamination effect, where polluted sources spread across multiple AI systems. Because AI models blend legitimate and fraudulent content, the resulting answers can appear credible, making scams much harder to detect. Aurascape says users should treat AI-generated contact information with caution and independently verify phone numbers, especially when dealing with customer service, travel or financial requests. They also recommend avoiding sharing sensitive information with AI assistants and being mindful that these systems are still evolving and may surface unverified or manipulated data.
Our next story comes from Bleeping Computer, and it's a reminder that malware distribution through pirated media is still very much alive and, in this case, increasingly sophisticated. Security researchers at Bitdefender have uncovered a fake torrent for the movie One Battle After Another that hides malware inside what appear to be harmless subtitle files. The torrent claims to contain One Battle After Another, a Paul Thomas Anderson film released in late September starring Leonardo DiCaprio, Sean Penn, and Benicio del Toro. According to Bitdefender, the torrent attracted thousands of seeders and leechers, suggesting widespread distribution. What makes this case stand out, researchers say, is the complexity and stealth of the infection chain. The torrent bundle includes a video file, image files, a subtitle file, and a Windows shortcut designed to look like a movie launcher. When that shortcut is executed, it triggers a series of Windows commands that extract and run a malicious PowerShell script hidden inside the subtitle file. Bitdefender says the script is embedded between specific subtitle lines, making it unlikely to be detected by casual inspection. Once executed, the PowerShell code extracts multiple AES-encrypted payloads from the same subtitle file, reconstructing additional scripts that are dropped into a directory disguised as Microsoft diagnostic data. Those scripts then act as a malware dropper, executing several stages. They create a hidden scheduled task for persistence, extract additional payloads from image files bundled within the torrent, and rebuild further scripts and batch files in a Windows Sound Diagnostics cache directory. The final stage checks for the presence of Windows Defender, installs the Go programming language if needed, and loads the Agent Tesla remote access trojan directly into memory. Agent Tesla is a well-known Windows malware family that has been active for more than a decade.
It's commonly used to steal browser credentials, email and FTP logins, VPN details, and screenshots from infected systems. While the malware itself is not new, Bitdefender notes, it remains popular due to its reliability and ease of deployment. Researchers also say they've observed similar campaigns tied to other movie titles, sometimes using different malware families, including credential stealers like Lumma. Bitdefender's recommendation is straightforward: torrent files from anonymous publishers frequently contain malware, and pirating newly released movies carries a high risk of compromise. It's also just a reminder that with legal streaming services' costs continuing to rise while overall quality continues to decline across many platforms, the stage, unfortunately, is set for piracy to return and party like it's the early 2000s.

According to a new study published this week by researchers at Stanford University, an AI agent named Artemis was able to hack Stanford's network over a 16-hour test period and outperformed nearly all human penetration testers involved in the experiment, according to reporting from Business Insider. Artemis was given access to Stanford's computer science network, which includes roughly 8,000 devices ranging from servers and desktops to smart devices. The AI was allowed to operate for 16 hours over two workdays, while 10 professional human testers were asked to contribute at least 10 hours of work. When researchers compared the results from the first 10 hours, Artemis placed second overall, outperforming nine out of the 10 human participants. Within that time window, the AI identified nine valid vulnerabilities with an 82% valid submission rate, according to the study. Some of the flaws had been missed entirely by human testers. In one interesting case, Artemis uncovered a vulnerability on an older server that human testers couldn't access because their browsers refused to load it. The AI bypassed this issue by making a command line request instead.

The researchers say Artemis works differently than humans. When it detects something potentially interesting, it automatically spins up additional sub-agents to investigate in parallel. Human testers, by contrast, must examine targets one at a time. Cost was another major factor in the study. Running Artemis was estimated to cost about $18 an hour, while a more advanced version ran at about $59 an hour, far less than the annual salary of about $125,000 for a professional penetration tester. The researchers do note limitations. Artemis struggled with tasks that required navigating graphical interfaces and was more prone to false positives, sometimes mistaking routine network activity for successful intrusions. The findings arrive amid broader concerns about AI lowering the barrier to cybercrime. Recent reports have linked AI tools to phishing campaigns, fake identities and state-linked hacking activity. Stanford researchers say their work highlights both the defensive potential and the growing risks of AI-driven cyber capabilities.

From my perspective, it's interesting to see this happen in a university network, which frankly is going to have a lot of issues to find and exploit. I'd love to see a repeat of this experiment in a well-defended environment such as a bank. The real danger here isn't AI operating autonomously on its own. As we've seen from recent successful nation-state attacks, it's how humans and AI can scale together to achieve more. As all of these stories show, the attack surface isn't shrinking anytime soon. It's evolving, growing fast, and it comes just as IT and security teams face shrinking budgets, layoffs and rising expectations. All of this points to a challenging 2026 ahead for defenders, and likely another good year for attackers. We're always interested in your feedback. You can reach us at technewsday.com or leave a comment under the YouTube video. Please help us spread the word.
Like, subscribe, leave a review, and if you enjoyed the show, please tell others. We'd love to grow our audience, and we need your help. I've been your host, David Shipley. Jim Love will be back on Wednesday.
We'd like to thank Meter for their support in bringing you this podcast. Meter delivers full-stack networking infrastructure, wired, wireless and cellular, to leading enterprises. Working with their partners, Meter designs, deploys and manages everything required to get performant, reliable and secure connectivity. They design the hardware and the firmware, build the software, manage deployments and run support. It's a single, integrated solution that scales from branch offices and warehouses to large campuses and data centers. Book a demo at meter.com/cst. That's M-E-T-E-R dot com slash C-S-T.
Host: David Shipley (sitting in for Jim Love)
Episode Theme:
A roundup of the latest cybersecurity threats and responses, focusing on Apple zero-day vulnerabilities, the manipulation of AI-powered search engines by scammers, sophisticated malware in torrent files, and Stanford’s experiment with AI-driven penetration testing.
[00:22 - 03:42]
“Because WebKit is used not only by Safari, but by all third party browsers on iOS and iPadOS... the impact of these vulnerabilities extends across the mobile browsing ecosystem.” — David Shipley [01:57]
[03:43 - 07:25]
“Because AI models blend legitimate and fraudulent content, the resulting answers can appear credible, making scams much harder to detect.” — David Shipley [06:35]
[07:26 - 10:57]
“Torrent files from anonymous publishers frequently contain malware, and pirating newly released movies carries a high risk of compromise.” — David Shipley [10:23]
[10:58 - 13:23]
On Apple’s broad vulnerability impact:
“Because WebKit is used not only by Safari, but by all third party browsers on iOS and iPadOS... the impact of these vulnerabilities extends across the mobile browsing ecosystem.” — David Shipley [01:57]
On AI scam sophistication:
“Because AI models blend legitimate and fraudulent content, the resulting answers can appear credible, making scams much harder to detect.” — David Shipley [06:35]
On piracy and malware:
“Pirating newly released movies carries a high risk of compromise.” — David Shipley [10:24]
On AI pentesting:
“The real danger here isn’t about AI operating autonomously on its own… it’s how humans and AI can scale together to achieve more.” — David Shipley [13:01]
For more detailed stories and updates, visit Cybersecurity Today or reach out at technewsday.com.