The FAIK Files – "Hacking Consciousness and Ordering Chaos"
Podcast: The FAIK Files
Hosts: Perry Carpenter & Mason Amadeus | N2K Networks
Date: September 5, 2025
Episode Theme:
A multifaceted look at the evolving risks, wild potentials, and societal questions raised by the current wave of artificial intelligence—ranging from the democratization of hacking tools to the surprisingly difficult quest to make fast-food drive-thrus run on AI. Along the way, the hosts wrestle with anxieties about AI consciousness and the meaning of "open" in a national LLM release.
Episode Overview
This episode explores the double-edged nature of modern AI: how open-source and enterprise security tools are easily co-opted by malicious actors, the growing philosophical and practical debate over AI consciousness and welfare, the political and technical significance of Switzerland's radically open new LLM, and a real-world AI fail courtesy of Taco Bell's drive-thru disaster. The hosts blend technical insight, thoughtful debate, and plenty of humor as they probe the boundaries between hype, harm, and true societal progress.
Key Discussion Points & Insights
1. Hex Strike: The AI Hacking Tool Changing the Security Game
[02:51–20:43]
- Background:
- "Someone develops a penetration testing tool that makes hacking automated or faster or easier...and then it gets used by people with malicious intent." – Mason (02:51)
- What is Hex Strike?
- An AI-powered open-source tool that wraps hundreds of known penetration testing utilities, orchestrated by autonomous AI agents.
- Reduces friction for both defenders and opportunistic attackers.
- Allows prompting via LLMs; requires users to bypass standard LLM guardrails by claiming a legitimate research role.
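The orchestration pattern described above, an AI agent choosing among hundreds of wrapped utilities, can be pictured as a tool registry plus a dispatch step. This is a minimal generic sketch of that pattern; the tool names, stub functions, and registry design are illustrative assumptions, not Hex Strike's actual code.

```python
# Minimal sketch of an agent-style tool registry (illustrative only;
# not Hex Strike's real architecture). Each wrapped utility registers
# itself under a name, and the "agent" step routes a chosen tool name
# to its wrapper.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a wrapped utility to the registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@register("port_scan")
def port_scan(target: str) -> str:
    # A real wrapper would shell out to an existing utility; this is a stub.
    return f"scanned {target}"

@register("dns_lookup")
def dns_lookup(target: str) -> str:
    return f"resolved {target}"

def dispatch(tool_name: str, argument: str) -> str:
    """The 'agent' step: route a model-chosen tool name to its wrapper."""
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)
```

The friction reduction the hosts describe comes from exactly this shape: once utilities are wrapped behind a uniform interface, an LLM only has to emit a tool name and an argument.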
- Security Implications:
- Perry notes this is "an accepted risk in this space" (03:22) as attackers and defenders alike use similar toolkits.
- New AI-driven approaches (like Hex Strike and Anthropic’s Claude) facilitate large-scale, automated attacks—including data exfiltration and even extortion, as shown in Anthropic’s 2025 threat report.
- “This tool just reduces the friction to setting something like that up. And it's completely open source. It's very cool but also very scary.” – Mason (09:50)
- Debate: Should Good Guys Share Tools?
- Perry: Sharing levels the playing field so time-strapped defenders can better keep up. “The truth is, you’re not [giving anything away to bad guys that they don’t already have].” (15:08)
- Open tools can inspire both script kiddies and future defenders; there’s a degree of inevitability—motivated attackers will find the dark web resources regardless.
- Mason: "It's funny how it looks like handing bad guys a tool, but in reality it's handing good guys the bad guys' tools..." (17:56)
- Perry: There’s a huge gap in motivation/procedural rigor/ethical standards between criminals and corporate defenders.
- Takeaway:
- Transparency, ethical sharing, and public demonstration are crucial—most knowledge is already available to criminals, and raising general defensive skills is vital.
2. AI Consciousness & Welfare: From Tech to Philosophy
[21:28–32:32]
- Anthropomorphizing Chatbots:
- As chatbots become conversational and evoke emotional responses, people increasingly ascribe consciousness and agency to them.
- Perry: “We as humans tend to anthropomorphize virtually everything…” (21:28).
- Dueling Industry Philosophies:
- Mustafa Suleyman (Microsoft AI CEO): AI should deepen, not replace, our humanity and connections; avoid building AI as a person but instead as a tool for people (26:48).
- Anthropic: Leans toward model “welfare,” giving Claude the capacity to close distressing chats (not for its own sake, but to avoid polluting its “psychological” data environment).
- Philosophical Zombies (P-Zombies):
- Term raised by Mason: “They are not...capable of actually having suffering or any kind of internal experience, but they certainly seem like it.” (30:36)
- The risk is both technical (model drift) and human (user confusion over the true nature of AI).
- Media Hype & Responsibility:
- Headlines often exaggerate—implying AI “distress” or “welfare”—muddying already slippery terms.
3. Switzerland’s National "Open Everything" LLM
[32:32–44:00]
- Historic Release:
- Switzerland, a national government rather than a private company, releases "Apertus," a radically open LLM built by public institutions, with full documentation and datasets.
- Intended as a European-compliant, transparent, and privacy-respecting alternative to big corporate models.
- “Apertus is the leading public AI model, a model built by public institutions for the public interest...AI can be a form of public infrastructure like highways, water or electricity.” – Engadget article quoted by Mason (36:15)
- Technical Details:
- Comes in 8B and 70B parameter sizes, base and instruct variants.
- Multilingual—40% of training data not in English; respects robots.txt and Swiss/EU privacy laws.
- Released on Hugging Face and with a 111-page technical report; code and data are fully inspectable.
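The crawl-politeness detail above (respecting robots.txt during data collection) can be checked with Python's standard library. A minimal sketch of the check a polite crawler performs before fetching a page; the robots.txt rules and URLs here are hypothetical examples, not Apertus's actual crawl code.

```python
# Sketch of a robots.txt check, as a polite crawler would perform
# before fetching a page (the rules and URLs are made-up examples).
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# /public/ paths are fetchable; anything under /private/ is not.
print(parser.can_fetch("*", "https://example.com/public/page"))
print(parser.can_fetch("*", "https://example.com/private/page"))
```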
- Talk With Maya (AI Voice Assistant):
- Maya: “The instruct version, that's the student who's also been coached on how to actually, you know, answer questions helpfully. It's been fine tuned for conversational tasks. Less raw data, more practical application.” (39:15)
- Significance:
- Sets a new bar for transparency, compliance, and public benefit in AI; offers a working model for other nations or the public sector.
4. Taco Bell’s AI Drive-Thru Debacle
[44:47–63:41]
- The Promise:
- Taco Bell aimed to roll out AI-powered order taking for improved customer and employee experiences (46:46). Vendor: Omilia.
- The Reality:
- AI failed in high-profile ways—viral videos show customers ordering absurd quantities (“Can I get 18,000 water cups please?” (51:13)) causing crashes and significant confusion.
- Human intervention remained crucial; the system was not robust to edge cases or mischief.
- Taco Bell is now rethinking its approach:
- “[The system] lets me down, but sometimes it really surprises me. I mean that's a... microcosm of all of AI.” – Perry (53:10)
- Where Did It Go Wrong?
- Nate Jones (TikTok channel, guest audio at 54:16): Blames reliance on dated, rules-based voice tech rather than contemporary LLMs. Both Taco Bell and the press exaggerated the “AI” aspect for marketing.
- Broader Point:
- There’s rampant “AI-washing”—selling older tech under the AI label.
- Both traditional and generative AI can break, but fail in different ways; drive-thru scenarios are especially hard due to unpredictable, often adversarial users.
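One way to picture why rules-based ordering breaks under adversarial input: a grammar-driven parser happily accepts any quantity it can match unless someone thought to add a sanity check. The grammar, cap, and function below are a hypothetical toy, not Taco Bell's or Omilia's actual system.

```python
# Toy rules-based order parser: matches "<number> <item>" with optional
# politeness wrappers, plus the sanity cap a robust system would need.
# (Hypothetical example; not the vendor's real grammar.)
import re

def parse_order(utterance: str, max_qty: int = 50):
    m = re.match(
        r"(?:can i get )?(\d[\d,]*)\s+(.+?)\s*(?:please)?\??$",
        utterance.lower(),
    )
    if not m:
        return None
    qty = int(m.group(1).replace(",", ""))
    item = m.group(2)
    if qty > max_qty:
        # Without this guard, "18,000 water cups" goes straight through.
        return {"error": f"quantity {qty} exceeds limit"}
    return {"qty": qty, "item": item}
```

The viral clip from the episode is exactly the unguarded path: a grammar that matches the utterance has no inherent notion of a plausible order, so edge-case handling has to be bolted on rule by rule.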
- Industry Trends:
- McDonald’s moving to kiosks; Chick-fil-A using highly-optimized human service for efficiency.
- Bottom Line:
- Generative AI in chaotic, public-facing environments still faces serious challenges; technical honesty and realistic planning remain essential.
Notable Quotes & Memorable Moments
- On the inevitability of tool abuse:
- “It is meant to make a security admin or threat researcher's life easier... At the same time... it will be very shiny and very attractive to bad actors as well.” – Perry (03:22)
- “It's funny how it looks like handing bad guys a tool, but in reality it's handing good guys the bad guys' tools in a way they can easily use to try and amp up defense.” – Mason (17:56)
- On the public AI debate:
- “Are you giving the bad guys the tools? You’re not.” – Perry (15:08)
- On the evolving AI-human boundary:
- “We as humans tend to anthropomorphize virtually everything that we interact with...especially when it's starting to use language.” – Perry (21:28)
- “That's the thing, right? They are not, as far as we're aware, they're not capable of actually having suffering...but they certainly seem like it.” – Mason (30:36)
- On the failures of "AI-washing":
- “Nobody is getting the story correct here... Taco Bell built their system on 20-something-year-old technology...has nothing to do with AI the way you and I talk about every day.” – Nate Jones (54:16)
- On optimism (and humility) in AI:
- “It's always embarrassing at the beginning...Two years ago we were talking about AI overview telling people to eat rocks, and now it's a lot better.” – Perry (64:04)
Timestamps & Segment Highlights
- 02:51 – Intro to Hex Strike and the inevitability of dual-use AI tools
- 06:22 – Deep dive into Hex Strike’s orchestration architecture
- 08:26–09:50 – Anthropic’s 2025 threat report on AI-powered extortion
- 15:08–20:17 – Responsible disclosure, tool sharing debates
- 21:28 – On anthropomorphizing chatbots and projecting agency
- 26:26 – Long-term memory and the risks of "echo chamber" AIs (Altman & Suleyman quotes)
- 29:03–31:44 – P-Zombies, philosophical confusions, and media irresponsibility
- 32:32–44:00 – Apertus: radical transparency in national LLMs, Maya’s instruct/base model metaphor
- 44:47 – Taco Bell’s AI pilot: dream, reality, chaos
- 51:13 – Viral order: “Can I get 18,000 water cups please?”
- 54:16 – Nate Jones explains Taco Bell’s real tech (rules-based, not LLM)
- 61:40–64:28 – Drive-thru AI breaks in new ways; industry approaches to automation/humanity
Episode Tone & Language
The episode maintains a witty, skeptical, and highly informed tone. The hosts are quick to poke fun at industry hype (especially around AI-washing and Taco Bell’s misadventures) but also probe for deeper insight into the genuine technical, ethical, and societal stakes—balancing tech details, philosophy, and playful banter.
Final Thoughts
This episode is a must-listen for anyone grappling with the darker and lighter realities of AI’s rapid evolution—from cybersecurity’s arms race to questions of digital personhood, from the mechanics of radical transparency in AI to the humble, chaotic battleground of the drive-thru window. The FAIK Files makes complex change approachable, all while reminding listeners: in our world, anything (and everything) can be faked.
