Threat Vector Podcast: "The Adversarial Hacker Mindset"
Date: December 11, 2025
Host: Michael Heller (Palo Alto Networks)
Guests: Greg Conti (Principal at Kopidion), Tom Cross (Threat Researcher at GetReal; Principal at Kopidion)
Event: Special episode recorded at DEFCON 33
Episode Overview
In this in-depth conversation, Michael Heller interviews Greg Conti and Tom Cross about "The Hacker Mindset," homing in on their DEFCON talk, “Dark Capabilities: When Companies Become Threat Actors.” The discussion explores how companies possess untapped or unexploited capabilities that, intentionally or unintentionally, could be used maliciously by the companies themselves, by governments, or by other bad actors. The guests advocate for adversarial thinking—understanding and analyzing technology from the perspective of a would-be attacker—to better defend against real-world threats. The episode highlights the importance of candid discussions about ethics and unintended uses, which are often uncomfortable but critical to meaningful cybersecurity progress.
Key Discussion Points
1. What Companies Can Do vs. What They Should Do
- Capabilities Spectrum
- Companies have a broad set of capabilities: some intentional, some unexploited, and some unknown even to the company itself.
- “What if we decided we wanted to be evil? How, in what ways could we be evil?”
— Greg Conti [02:46]
- Assessment Exercise
- Self-evaluation: Flip the script with an "evil hat" to audit all possible misuses of your own tech.
- This preemptive assessment can prompt companies to build safeguards before outside actors identify and exploit these dark capabilities.
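The sketch below is not from the episode; it is a hypothetical illustration of how a team might structure the "evil hat" self-audit described above: list each capability, its intended use, its worst plausible misuse, and any existing safeguard, then flag the unmitigated gaps. All capability names and fields are invented for the example.

```python
# Hypothetical "evil hat" self-audit sketch: enumerate capabilities,
# imagine their worst plausible misuse, and flag any without safeguards.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str              # what the product can technically do
    intended_use: str      # what it is advertised/designed to do
    potential_misuse: str  # the "evil hat" scenario
    safeguard: str | None = None  # mitigation, if any exists today

CAPABILITIES = [
    Capability("microphone access", "voice commands",
               "ambient conversation capture", safeguard="on-device wake word only"),
    Capability("location history", "map the home for navigation",
               "track occupants' daily patterns", safeguard=None),
    Capability("cloud firmware updates", "ship bug fixes",
               "silently push surveillance features", safeguard=None),
]

def evil_hat_audit(capabilities: list[Capability]) -> None:
    """Print every capability whose misuse scenario has no safeguard."""
    for cap in capabilities:
        if cap.safeguard is None:
            print(f"UNMITIGATED: {cap.name} -> {cap.potential_misuse}")
        else:
            print(f"mitigated:   {cap.name} ({cap.safeguard})")

if __name__ == "__main__":
    evil_hat_audit(CAPABILITIES)
```

The value is not the code but the discipline: writing the misuse column forces the uncomfortable conversation the guests describe before an outside actor has it for you.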
2. Governments and Company Capabilities in Times of Conflict
- Commandeering Corporate Tools
- Governments might exploit corporate capabilities during conflict—sometimes with consent, sometimes by compulsion (e.g., Defense Production Act).
- Real-life example: Companies independently making decisions (e.g., shutting off satellite systems) that affect geopolitical events.
- Strategic Alignment
- Governments need to consider not only how they might use these capabilities, but also the risk that adversaries (foreign states) or company insiders could turn them against national strategic interests.
3. Openness and Ethical Dialogue at DEFCON
- Contrast with Other Security Conferences
- "The conference was very uncomfortable with us having that conversation. They asked us to remove the slide."
— Tom Cross [05:33]
- DEFCON, however, allows for challenging, ethically nuanced debates.
- "The conference was very uncomfortable with us having that conversation. They asked us to remove the slide."
- Vitality of Hacker Perspective
- The hacker community’s curiosity and adversarial mindset are crucial to uncovering, understanding, and mitigating risks that arise from unanticipated uses of technology.
4. Practical Examples: The “Evil Robotic Vacuum”
- Exploring Extreme Use Cases
- "We had some fun with an evil robotic vacuum. What could evil robotic vacuum do?...It could listen to all your conversations and report back for ideological compliance. It's literally a vacuum. So it could be harvesting DNA."
— Tom Cross [08:47]
- Real-world research found vulnerabilities in robotic vacuums: poor data deletion, AI face recognition code, and more data collection than disclosed.
- "We had some fun with an evil robotic vacuum. What could evil robotic vacuum do?...It could listen to all your conversations and report back for ideological compliance. It's literally a vacuum. So it could be harvesting DNA."
- Broader Point:
- Everyday devices increasingly have latent capabilities. The gap between user perception (what the device seems to do) and capability (what it actually can do) is immense.
5. Adversarial Thinking: Mindset and Teachability
- From Red Teaming to Corporate Responsibility
- Adversarial thinking isn’t just for security professionals; benefits extend to sales, marketing, and product development.
- “Developers for sure…they’re not thinking like an adversary.”
— Tom Cross [14:07]
- Anthropological Approach
- “The mindset of the hacker scene is anthropological…you don’t know what this thing is supposed to be. You have to discover that…The gap between what the thing really is and what it was supposed to be is…where a lot of interesting capabilities or security vulnerabilities exist.”
— Greg Conti [16:20]
6. Designing for Security: Technical and Institutional Measures
- Technical Controls
- Remove unnecessary capabilities.
- Build architectures that either make misuse difficult or highly visible (transparent).
- Institutional Controls & Accountability
- Instituting processes so that any questionable activity is widely known internally.
- Third-party audits and mechanisms like “warrant canaries” to instill external trust.
- “If you’re running a social media site, you might put something out there that says, ‘I’ve never had to respond to a warrant for which I was prohibited from disclosing.’”
— Greg Conti [20:43]
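As a rough illustration of the warrant-canary mechanism Conti describes, here is a minimal sketch (assuming Python, the third-party cryptography package, an Ed25519 key pair, and hypothetical statement wording) that signs a dated canary statement so outsiders can verify it is authentic and notice if it stops being reissued.

```python
# Minimal warrant-canary sketch: sign a dated statement with Ed25519 so
# readers can verify authenticity and notice if the canary goes stale.
# Requires the third-party "cryptography" package (pip install cryptography).
from datetime import date

from cryptography.hazmat.primitives.asymmetric import ed25519

STATEMENT = (
    "As of {today}, ExampleSocial has never received a warrant "
    "that it was prohibited from disclosing."  # hypothetical wording
)

def issue_canary(key: ed25519.Ed25519PrivateKey) -> tuple[bytes, bytes]:
    """Return (canary_text, signature) for today's date."""
    text = STATEMENT.format(today=date.today().isoformat()).encode()
    return text, key.sign(text)

def verify_canary(public_key: ed25519.Ed25519PublicKey,
                  text: bytes, signature: bytes) -> bool:
    """Return True if the signature matches the text, False otherwise."""
    try:
        public_key.verify(signature, text)
        return True
    except Exception:
        return False

if __name__ == "__main__":
    private_key = ed25519.Ed25519PrivateKey.generate()
    text, sig = issue_canary(private_key)
    print(text.decode())
    print("valid:", verify_canary(private_key.public_key(), text, sig))
```

The transparency comes from the absence of a signal: if the dated, signed statement stops being reissued, outside observers can infer that something changed without the company ever breaking a gag order.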
7. Scale and Future of Security Challenges
- Expansion of Attack Surfaces
- Increasing prevalence of embedded systems and IoT devices means more “superpowers” for every company—think dating sites, industrial control systems, etc.
- Myth of Automated Solution
- Debunking the idea that AI-generated code will fix vulnerabilities; AI replicates human errors at scale.
- DEFCON’s continued growth indicates the persistent and expanding need for adversarial security research.
Notable Quotes & Memorable Moments
- On defining dark capabilities:
“If you imagine a social networking, you think, ‘Oh, I can connect with people...’ But the social networking site knows every direct message ever sent...they only expose a little tiny fraction. So many people think that that is the full end state of what that company can do, when in reality it’s like 0.01%.”
— Tom Cross [03:46]
- On uncomfortable conversations:
“We’re allowed to wade into these ethically challenging discussions…put on the black hat and look at things from that perspective...Any tool has both malicious and beneficial uses.”
— Greg Conti [05:44]
- On adversarial thinking as teachable:
“We found that adversarial thinking is teachable. We have people cheat in class and cheat on a test and some other things, but at the end of it, they're better tuned.”
— Tom Cross [13:18]
- On mindset:
“William Gibson said, the street finds its own uses for things… the gap between what the thing really is and what it was supposed to be is…where a lot of interesting capabilities or security vulnerabilities exist.”
— Greg Conti [16:20]
Timestamps for Key Segments
- [02:17] — Introduction to the “Dark Capabilities” talk and its motivation
- [03:46] — Hidden company capabilities and real-world implications
- [05:33] — Ethical discomfort at traditional conferences; DEFCON’s openness
- [08:30] — Evil robotic vacuums: practical thought exercises
- [13:00] — The “Ulysses Pact” and embedding constraints
- [14:07] — Who should practice adversarial thinking in companies
- [16:20] — Anthropological, discovery-oriented hacker mindset
- [17:41] — Antivirus software as a hypothetical surveillance system
- [20:43] — Warrant canaries and transparency
- [23:35] — Growing need for security as tech complexity increases
- [23:46] — Final takeaways: Value of adversarial, open dialogue
Big Takeaway
“Hackers are good at seeing that distinction between what things are and what they were meant to be, and figuring out how they can utilize things in ways that were not intended and may not be wanted. It’s this sort of adversarial mindset...applied in situations like this...can be turned to good... By exposing these uncomfortable truths, we actually make things safer.”
— Greg Conti [24:10]
Summary
This episode highlights how adversarial thinking—questioning what systems really do versus what they’re advertised to do—is essential for defenders and builders of modern technology. Companies, governments, and security pros must honestly examine their own dark capabilities, anticipate misuse, and embed institutional and technical guardrails. Comfort with hard, ethically ambiguous questions, paired with interdisciplinary input, is essential as technology grows more powerful and pervasive. The hacker ethos—curiosity, openness, and willingness to explore “what if?”—remains the strongest bulwark against future threats.
