Hacking Humans – "When AI wears a suit and tie"
N2K Networks | March 19, 2026
Overview
This lively episode explores the evolving landscape of deception, influence, and social engineering in cybercrime. The hosts—Dave Bittner, Joe Carrigan, and Maria Varmazis—dig into recent breaches, deepfake developments, and scam trends, focusing on how criminals blend familiar tactics with new technology, such as advanced voice phishing ("vishing") and generative AI. The team's banter wraps technical insights in humor and anecdote, making cybersecurity accessible without downplaying real-world risks.
Key Discussion Points & Insights
1. Aggravated Identity Theft: Legal Clarifications
- Discussion (00:44–02:11):
Joe clarifies, thanks to listener Michelle, that "aggravated identity theft" specifically refers to cases where a stolen identity is used to commit another crime (e.g., wire fraud), rather than "vanilla" identity theft.
- Notable Quote:
"If you use that [stolen identity] to commit something like wire fraud in that person's name, then that becomes aggravated identity theft." – Joe Carrigan (01:35)
2. Listener Feedback: Shared Mailboxes in Cybersecurity
- Segment (02:11–05:38):
Maria reads a detailed listener email that explains shared mailboxes (common in business environments), their licensing nuances, and associated risks.
- Shared mailboxes often don't require a license and are widely used for group accounts (HR, Accounts Payable, etc.).
- Enabling mailbox login for multiple users is not best practice; assigning usernames/passwords and sharing them undermines security.
- Storage and feature limits (e.g., 50 GB for unlicensed mailboxes) may prompt bad practices.
- Memorable Moment:
“From a control standpoint, assigning an ID and a password and then sharing that information among multiple staff is definitely not a best practice. Amen to that, Robert.” – Maria Varmazis (05:23)
3. Ericsson Data Breach: Lessons on Third-Party Risk
- Main Story (06:48–16:11):
Joe details a major breach affecting Ericsson's US arm, exposing 15,000+ personal and financial records, not via direct compromise but through a third-party service provider. Key points:
- Attack vector: Vishing (voice phishing) led to credential compromise at the vendor.
- Detection lag: Incident occurred in April 2025, but Ericsson was notified only in November—months after initial discovery.
- Data Exposed: Names, addresses, dates of birth, Social Security numbers, IDs, financial and some health info.
- Mitigation: IDX identity protection services offered.
- Quotes:
- “[The breach] was not caused by compromising Ericsson, but rather by compromising a third party service provider.” – Joe Carrigan (07:57)
- “If I was doing business with a company that took this long to notify me that they had breached my data, I would be nonplussed.” – Joe Carrigan (10:16)
- “Good news is...there is currently no evidence indicating that the stolen data has been misused or publicly leaked. Yeah, that is cold comfort.” – Joe Carrigan (14:41)
- Discussion Themes:
- Debate on what constitutes "reasonable" notification time.
- “Cold comfort” of delayed breach reports and the erosion of public trust.
- Frustration with repetitive, seemingly ineffective identity protection offers after data breaches.
4. Scam Trends: ‘You’ve Won a Prize’ & Meta’s Scam Fight
- Segment (18:49–26:20):
A. Prize Scams (18:49–22:34):
- Explains classic scams where “winners” must pay fees or taxes to claim fake prizes.
- FTC advice: Real prizes are free; slow down and research; beware of pressure tactics.
- Humorous anecdote from Dave about meeting Bob Barker (20:01).
- Notable Quote:
“Anyone who tells you to pay to get your prize is a scammer.” – Dave Bittner (21:45)
B. Meta and Scam Ads (22:34–26:54):
- Meta (Facebook) claims removal of 159 million scam ads and shutdown of 11 million scam accounts in 2025.
- Critique: Efforts seen as insufficient, since scam ad revenue is likely significant for the platform.
- Lawmakers push for advertiser verification.
- Cynical take on Meta’s integrity and sincerity in addressing scam/abuse.
- Memorable Quote:
“Meta claims it’s fighting scams aggressively. My experience says otherwise...I’ll believe it when I see it.” – Dave Bittner (24:44)
5. Deepfakes and the Shrinking Trust Online
- Maria’s Deepfake Update (29:34–36:50):
- Recent deepfake videos targeting U.S.-Canada relations, with entire synthetic videos (e.g., “Warren Buffett” pontificating on geopolitics, all AI-generated, highly subtle, and monetized).
- 300,000+ views on one example; disinformation and financial motivation blend.
- Broader lament: YouTube increasingly flooded by “AI slop,” diminishing authentic, expert content, especially for hobbyists/tutorial seekers.
- Quotes:
- “It is a self-perpetuating problem...the scammers have made money off your views.” – Maria Varmazis (32:43)
- “YouTube has just become a pile of garbage in terms of content...If I want to know about something, I can't find a not obviously AI video.” – Joe Carrigan (33:05)
- Side Discussion:
- Prediction: An "economy of authenticity" will rise, with new value attached to genuine, human-created web content.
- Barriers to challenging incumbent tech giants (YouTube/Meta) remain high.
6. Grammarly and AI Voice Cloning Lawsuit
- Segment (37:31–39:17):
- Grammarly faces class action after offering to write “in the voice” of real journalists without consent.
- Debate over scraping versus developing an authentic, individual writer’s style.
- Quote:
“That’s actually a skill...not okay for someone to come in and say, we’re just going to steal that. Thank you very much.” – Maria Varmazis (38:08)
7. Catch of the Day: “Discount Elon Musk” Bot (40:40–45:47)
- Scambait Drama:
- Read-through of a ridiculous romance scam featuring “Elon Musk,” who immediately love-bombs and demands pictures from a skeptical recipient (“Call me Jen!”).
- Illustrates how transparent and bot-like some social engineering attempts have become; humans still play along to expose their absurdity.
- Memorable Exchanges:
- “Honey, I love you very much. Honey. I love you and I will love you forever.” – “Elon Musk” Scam Bot (42:09)
- “Your love is literally based on nothing.” – Maria as “Jen” (45:08)
Memorable Quotes
- “It’s AI, so it’s fair [to summarize poorly].” – Maria Varmazis on AI-generated journalism (38:09)
- “LinkedIn is Facebook in a suit...it’s unhinged garbage from front to back.” – Joe Carrigan (26:19)
- “The economy of authenticity...people are gonna be so hungry for authentic creators.” – Dave Bittner (35:53)
- “You don't have to get hacking if you've been hacking the whole time.” – Maria Varmazis (13:22)
Important Timestamps
- 00:44 – 02:11: Aggravated Identity Theft explained
- 02:11 – 05:38: Shared mailboxes, risks & licensing
- 06:48 – 16:11: Ericsson breach deep dive: third-party risk, notification delays
- 18:49 – 22:34: “You’ve won a prize” scams, FTC recommendations
- 22:34 – 26:54: Meta and scam ads: skepticism and regulatory debate
- 29:34 – 36:50: Risks and monetization of deepfakes on YouTube
- 37:31 – 39:17: Grammarly lawsuit and the question of AI mimicry
- 40:40 – 45:47: “Catch of the Day” – Elon Musk bot romance scam
Tone & Style
Conversational, irreverent, and informed, mixing personal anecdotes, technical expertise, and sharp skepticism toward tech platforms’ policing of scams. Humor is used to underscore, rather than minimize, the ongoing seriousness of social engineering threats.
Takeaways
- Third-party vendors remain a serious risk to organizations, with slow breach notifications compounding the problem.
- Scams are endlessly recycled and now far more scalable thanks to AI, which blurs the line between credible and fraudulent content.
- Trust in online platforms continues to erode as regulation lags and user vigilance is demanded.
- The arms race between AI-generated slop and authentic content is escalating, changing how knowledge and expertise are shared online.
- Law and policy are struggling to keep up with new dimensions of identity theft, disinformation, and AI mimicry.
