CyberWire Daily — "A Spyware Swiss Army Knife"
Date: February 10, 2026
Host: Dave Bittner, N2K Networks
Main Theme:
This episode covers a surge of sophisticated cyber threats, from versatile mobile spyware ("Zero Day RAT") and attacks on critical infrastructure to evolving malware and the challenges of artificial intelligence (AI) safety in large language models (LLMs). It also features an interview with RSA Conference researcher Omer Akgul on measuring LLM truthfulness and consistency, and recaps a notable legal case involving penetration testers.
Key News Highlights & Analysis
1. Zero Day RAT: The Mobile Spyware ‘Swiss Army Knife’
[02:25]
- New Mobile Spyware: "Zero Day RAT" is a commercial spyware toolkit that can fully compromise both Android and iOS devices.
- First observed on February 2, 2026, and analyzed by mobile security firm iVerify.
- Sold via Telegram; comparable to nation-state level tools.
- Infection requires delivery of a malicious binary (via phishing, trojanized apps, or social engineering).
- Features include:
- Passive Surveillance: Device profiling, app usage, account details, messages, precise location tracking.
- Active Surveillance: Camera, mic, screen recording, keylogging.
- Financial Theft: Clipboard-based cryptocurrency theft and banking credential harvesting.
- Attribution Challenges:
- Detection is difficult due to obfuscation.
- Infrastructure is decentralized, complicating takedown attempts.
2. Cyber Threats to UK Critical Infrastructure
[05:08]
- UK National Cyber Security Centre (NCSC) Alert: Infrastructure operators are warned to act immediately against "severe cyber threats," following coordinated malware attacks on Polish energy infrastructure in December.
- Jonathan Ellison, Director for National Resilience, stresses improved threat monitoring and hardened defenses via patching, MFA, and “Secure by Design” practices.
- NCSC sees such attacks as "realistic and potentially disruptive to everyday services."
3. Russia’s Escalating Censorship
[07:04]
- Tightening Telegram Restrictions: Russia's communications regulator is further throttling Telegram, promoting a state-run "super app" (Max) and blocking Western platforms (Facebook, Instagram, X, YouTube).
4. FTC Warning on Foreign Data Sales
[07:56]
- FTC Enforcement: 13 data brokers warned under PADFA (the Protecting Americans' Data from Foreign Adversaries Act of 2024).
- Prohibits sale of sensitive US data (health, financial, biometric, geolocation, government IDs, military status) to adversarial nations (China, Russia, Iran, North Korea).
- Enforcement includes civil penalties of up to $53,000 per violation.
5. Stealthy “Deadvax” Malware Campaign
[09:01]
- Novel Attack Techniques:
- Starts with spearphishing that delivers VHD (virtual hard disk) files hosted on IPFS (InterPlanetary File System) to bypass email filters.
- Uses heavily obfuscated scripts and memory-only execution, deploying an encrypted AsyncRAT payload inside legitimate Microsoft processes.
- Fileless execution and extreme obfuscation make detection and response challenging.
6. Old-School Linux Botnet Revival
[10:17]
- New “Stalker” Botnet: Discovery of a Linux botnet using 2009-era tools (IRC bots, old kernel exploits, etc.).
- An estimated 7,000 infected systems, mostly legacy Linux machines.
- Tactics and artifacts resemble older Romanian-linked groups, but may be copycat activity.
7. Critical Remote Support Vulnerability
[11:19]
- BeyondTrust RCE Flaw: CVSS 9.9 vulnerability in BeyondTrust remote support tools.
- ~8,500 exposed instances.
- No exploitation observed yet, but BeyondTrust products have previously been targeted by state-linked groups.
- Urgency in patching highlighted.
8. AI Safety: Is the US Moving Too Fast?
[12:17]
- Shift in AI Policy:
- New Administration prioritizes rapid innovation over strict regulation.
- Experts warn about risks of weak oversight and poor governance.
- Camille Stewart Gloster: "Weak oversight can create real harm," referencing cases of disruptive, uncontrollable AI agents.
- Michael Daniel warns that lax US rules may hinder American firms abroad, especially in Europe.
- Mark Kelly: “Stronger safeguards could strengthen U.S. competitiveness.”
- AI Model Alignment Concern:
- Microsoft Azure CTO Mark Russinovich's team showed that a single unlabeled training prompt can dismantle LLM safety controls.
- This flaw, dubbed “GRP obliteration,” suggests that AI safety alignment can be fragile and open to sleeper backdoors.
Feature Interview: Omer Akgul (RSA Conference)
Topic: LLM Consistency Metrics in Cybersecurity
Segment Start: [14:04]
Goals of the Research
- Investigate if bounds can be placed on the truthfulness of large language models (LLMs).
- Explore how to measure LLM “consistency” as a proxy for accuracy and reliability.
Key Discussion Points
- On LLM Hallucinations:
"They say stuff, but it’s pretty hard to fact check what they’re saying. They’re pretty confident in what they’re saying. And so they lie all the time. They call these things hallucinations."
— Omer Akgul [14:07]
- Consistency Defined:
"How likely is the model to produce the same output given a prompt? So say I ask it, tell me what two plus two is. How likely is it to say four each time? Or is it going to say something different?"
— Omer Akgul [15:19]
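Akgul's definition lends itself to a simple empirical measurement: sample the model repeatedly on the same prompt and score agreement with the most common answer. A minimal sketch, assuming a generic `sample_fn` callable (the `toy_model` stand-in below is hypothetical, not from the episode):

```python
import random
from collections import Counter

def consistency_score(sample_fn, prompt, n=20):
    """Fraction of n sampled answers that match the modal answer
    for the same prompt -- a crude estimate of consistency."""
    answers = [sample_fn(prompt) for _ in range(n)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / n

# Hypothetical stand-in for a stochastic LLM call:
# usually answers "4", occasionally "four".
def toy_model(prompt):
    return random.choice(["4"] * 9 + ["four"])

score = consistency_score(toy_model, "What is 2 + 2?")
```

A deterministic model scores 1.0; the noisier the sampling, the lower the score falls.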
- On Measuring Consistency:
- Automated metrics exist, but differ from human judgment.
- Example: Numeric “4” and written “four” — close for humans, possibly inconsistent via automation.
"Turns out [automated methods] aren’t... the same way that humans would compare answers and say these answers are consistent."
— Omer Akgul [16:20]
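The mismatch Akgul describes is easy to reproduce: literal string comparison calls "4" and "four" inconsistent, while even a light normalization step treats them as a human would. An illustrative sketch (the digit/word table is a toy; real consistency metrics are far more involved):

```python
# Naive automated check: literal string equality.
def exact_match(a, b):
    return a.strip() == b.strip()

# Minimal normalization so digit and word forms compare equal
# (hypothetical toy mapping, for illustration only).
NUMBER_WORDS = {"zero": "0", "one": "1", "two": "2", "three": "3",
                "four": "4", "five": "5", "six": "6", "seven": "7",
                "eight": "8", "nine": "9"}

def normalized_match(a, b):
    norm = lambda s: NUMBER_WORDS.get(s.strip().lower(), s.strip().lower())
    return norm(a) == norm(b)

print(exact_match("4", "four"))       # False: automation flags inconsistency
print(normalized_match("4", "four"))  # True: closer to human judgment
```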
- Core Findings:
- Automated consistency metrics often do not match human intuition.
- Combining different methods and calibrating with human intuition improves results.
"We noticed that flaw in prior work... We do find that if you combine a couple of these different methods and you calibrate it with human intuition... you can get pretty close to human numbers."
— Omer Akgul [17:31]
- Human-in-the-Loop Calibration:
- Initial human ratings are collected; an auxiliary model learns human intuitions to estimate consistency.
"You can’t have a consortium of humans rate the answers of models every time... So we try to distill it down a little bit."
— Omer Akgul [20:05]
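One minimal way to "distill" human judgment, in the spirit of what Akgul describes, is to fit a decision rule on a small set of human-rated answer pairs — here, simply picking the cutoff on an automated similarity score that best agrees with binary human labels. The calibration data below is hypothetical:

```python
def calibrate_threshold(auto_scores, human_labels):
    """Choose the cutoff on an automated similarity score that best
    reproduces binary human consistency judgments (1 = consistent)."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(auto_scores)):
        preds = [1 if s >= t else 0 for s in auto_scores]
        acc = sum(p == h for p, h in zip(preds, human_labels)) / len(human_labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Hypothetical calibration set: automated scores vs. human labels.
scores = [0.95, 0.91, 0.40, 0.88, 0.15, 0.55]
labels = [1, 1, 0, 1, 0, 1]
threshold, agreement = calibrate_threshold(scores, labels)
```

In practice the "auxiliary model" Akgul mentions would be a learned model rather than a single threshold, but the shape of the problem — mapping cheap automated signals onto expensive human judgments — is the same.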
- Advice for Deploying LLMs in Critical Workflows:
- Always calibrate consistency metrics against real-world (human) performance.
- The level of calibration required depends on risk and deployment criticality.
"There needs to be some calibration going on to show that what your consistent metric is saying is what is actually going on in the real world."
— Omer Akgul [21:40]
- Looking Forward:
- Hopes for standardization and wider adoption of robust consistency calibration as domain-specific needs arise.
"I certainly hope more people will pay attention to the flaws that we’ve discovered and the solutions we’ve proposed. To make this stuff more robust for real world deployment."
— Omer Akgul [23:40]
Notable Quotes & Memorable Moments
- UK Cyber Threats:
"Operators must act now to strengthen cyber defenses and resilience."
— Jonathan Ellison, NCSC [05:26]
- On Fast AI Innovation Risks:
"Weak oversight can create real harm... Poorly controlled AI agents disrupted customers and could not be easily shut down."
— Camille Stewart Gloster [12:25]
- On LLM Safety Research:
"A single unlabeled training prompt can dismantle safety controls in large language models."
— Mark Russinovich, CTO of Microsoft Azure [13:44]
- Pen Testers’ Legal Ordeal:
"Even authorized hacking can end in handcuffs. After years of litigation, the county settled days before trial... Sometimes testing defenses exposes a different vulnerability altogether."
— Host, re: Dallas County, Iowa case [25:40-26:12]
Feature Story: The Pen Tester Payout
[25:17]
- In 2019, two pen testers (Gary DeMercurio and Justin Wynn) were hired to test courthouse security in Iowa and were arrested despite carrying written authorization.
- Charges were dropped, but only after jail time and massive personal/professional stress.
- Dallas County settled for $600,000; the case is a stark reminder of the legal risks even for authorized cybersecurity workers.
Key Segment Timestamps
- Zero Day RAT Explainer: [02:25]
- UK/NCSC Infrastructure Warning: [05:08]
- Russia Censorship/Telegram: [07:04]
- FTC Data Broker Warning: [07:56]
- Deadvax Malware Tactics: [09:01]
- Linux Stalker Botnet: [10:17]
- BeyondTrust RCE Vulnerability: [11:19]
- AI Safety Regulatory Debate: [12:17]
- LLM Safety Alignment ("GRP obliteration"): [13:44]
- Interview: Omer Akgul on LLM Consistency: [14:04]–[24:27]
- Pen Tester Settlement Story: [25:17]
Summary
This episode illustrates the growing sophistication, reach, and diversity of cyber threats: from advanced mobile spyware to legal and policy dilemmas in cybersecurity and AI. The feature interview with Omer Akgul provides deep technical insight into the opaque problem of LLM consistency—crucial as AI becomes central in both offense and defense. The pen-tester legal saga underscores that even authorized actors can become ensnared in real-world system flaws and ambiguities. Listeners are left with a heightened awareness of the need for robust vigilance, nuanced AI safety practices, and ongoing dialogue between practitioners, regulators, and researchers in cybersecurity.
