Podcast Title: Hacking Humans
Host/Author: N2K Networks
Episode: When AI Lies, Hackers Rise
Release Date: April 24, 2025
Description: Deception, influence, and social engineering in the world of cybercrime.
Episode Summary
In the April 24, 2025 episode of Hacking Humans, hosted by Dave Bittner, Joe Carrigan, and Maria Varmazis, the discussion delves into the evolving landscape of cybercrime, focusing on the interplay between artificial intelligence (AI) and malicious hacking. The episode is segmented into several key discussions, each highlighting innovative tactics used by cybercriminals and offering insights into emerging threats.
1. AI Hallucinations and Vibe Coding: A New Frontier for Cyber Threats
The episode kicks off with an exploration of AI hallucinations—instances where AI systems generate information that isn't grounded in reality. Maria Varmazis introduces a story from Thomas Claburn of The Register, discussing how AI's tendency to fabricate details can be exploited by hackers.
Maria Varmazis (07:06):
"AI tells you something that isn't true or references something that doesn't exist."
Joe Carrigan adds an academic perspective, referencing neuroscientists who suggest that AI-generated inaccuracies should be termed confabulations rather than hallucinations.
Joe Carrigan (07:21):
"They should be referred to as confabulations. Hallucinations come out of nowhere, out of whole cloth, and confabulations are based on a set of pre-existing facts."
The conversation pivots to vibe coding, a programming methodology where developers rely heavily on AI to generate code based on minimal prompts. Vibe coding shifts the programmer’s role from writing code manually to overseeing, testing, and refining AI-generated code.
Maria Varmazis (08:38):
"Vibe coding is an AI-dependent programming technique where the person describes a problem in a few sentences as a prompt to a large language model tuned for coding. The LLM then generates the software, shifting the programmer's role from manual coding to guiding, testing and refining AI-generated source code."
Dave Bittner elaborates on the implications of this shift, emphasizing how AI can both enhance productivity and introduce vulnerabilities. The hosts discuss how malicious actors can exploit AI hallucinations to create fake software packages—slopsquatted packages—that appear legitimate but contain harmful code.
Maria Varmazis (12:29):
"What Feross is pointing to here is that there is a growing concern that he has, and he's even seen it, where somebody has looked at the AI hallucinations and said, oh, this AI has come up with a package name that doesn't exist. Somebody else is going to try this. I'm going to go out and write that package. And that package is going to be malicious."
Joe Carrigan (13:59):
"The AI agent makes up the name of a library."
This tactic involves AI-generated code referencing non-existent libraries, which hackers can then create to lure unsuspecting developers into installing malicious software. The hosts emphasize the critical need for vigilance among developers, especially those leveraging AI tools for coding.
Maria Varmazis (17:42):
"Because I'll tell you is I've sat down and prompted AI to write some code for me before. It comes up with some pretty good code pretty quickly."
Joe Carrigan (18:06):
"You need to know what you're doing. You need to know about the library."
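The hosts' advice here points toward a concrete habit: audit any dependency name that AI-generated code asks you to install before running the installer. The sketch below is a minimal illustration of that idea (the VETTED set, the 0.8 cutoff, and the function name are assumptions for this example, not anything from the episode); it flags names that are near-misses of known packages, which is the shape a slopsquatted or typosquatted package typically takes:

```python
import difflib

# Hypothetical allowlist of packages your team has vetted; in practice this
# would come from a lockfile or internal registry, not a hard-coded set.
VETTED = {"requests", "numpy", "flask", "pandas"}

def audit_dependency(name: str, vetted=VETTED):
    """Classify a dependency name pulled from AI-generated code
    before anyone runs `pip install` on it."""
    if name in vetted:
        return ("ok", name)
    close = difflib.get_close_matches(name, vetted, n=1, cutoff=0.8)
    if close:
        # Near-miss of a known package: the classic typo/slopsquat shape.
        return ("suspicious", f"did you mean {close[0]!r}?")
    # Entirely unknown name: could be a hallucinated package an
    # attacker has since registered, so verify it by hand.
    return ("unknown", "verify this package exists and is trustworthy")
```

A check like this is no substitute for knowing the library, as Joe says, but it catches the cheapest version of the attack before anything executes.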
2. Smishing Scams Targeting Toll Road Users: Insights from Cisco Talos
Transitioning from AI-related threats, the podcast addresses smishing scams—phishing conducted via SMS—targeting toll road users across eight U.S. states. Joe Carrigan summarizes a comprehensive investigation by Cisco Talos, a renowned cybersecurity research team.
Joe Carrigan (19:07):
"The Talos team, certainly highly respected when it comes to cybersecurity investigations, found that this campaign was taking place in at least eight different states."
The scams involve impersonating legitimate toll services like E-ZPass, sending fraudulent messages claiming owed payments, and directing victims to spoofed websites designed to harvest personal and financial information. The infrastructure supporting these scams includes typo-squatted domains and phishing kits sold through platforms like Telegram.
Dave Bittner (21:05):
"They think that the campaign might be using data from large public leaks, like the 2024 National Public Data Leak. But Talos hasn't found any direct evidence for that."
The hosts discuss the low-risk, high-reward nature of such scams, categorizing them as nuisance malware. Even though individual scams might involve nominal sums, their volume and ease of execution make them pervasive threats.
Maria Varmazis (23:07):
"What the concern is that, well, if they're ever going to get into developing programming languages or developing in a programming language, make sure that you're using a programming language that... You have to be vigilant."
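The typosquatted domains Talos describes can often be caught mechanically before a victim ever clicks. As a rough sketch of that idea (the domain list and the 0.8 threshold are assumptions for illustration, not from the Talos report), a filter can score how closely an unfamiliar domain imitates a legitimate toll-service domain:

```python
from difflib import SequenceMatcher

# Hypothetical list of legitimate toll-service domains; real campaigns
# imitate brands like E-ZPass, but these exact entries are assumptions.
LEGIT = ["ezpass.com", "thetollroads.com"]

def lookalike_score(domain: str, legit=LEGIT) -> float:
    """Highest string similarity between `domain` and any legitimate
    domain; a high score on an unknown domain suggests typosquatting."""
    return max(SequenceMatcher(None, domain, real).ratio() for real in legit)

def is_suspicious(domain: str, threshold: float = 0.8) -> bool:
    # Exact matches are legitimate; near-matches are the danger zone.
    return domain not in LEGIT and lookalike_score(domain) >= threshold
```

Mail and SMS gateways use far more sophisticated versions of this check, but the underlying signal is the same: a domain that is almost, but not quite, the brand it claims to be.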
3. In-Person Payment Scams Using Fake Banking Apps: A BBC Investigation
Another significant topic covered is the resurgence of in-person scams facilitated by sophisticated fake banking applications. Dave Bittner narrates a BBC report about scammers meeting victims face-to-face and using counterfeit banking apps to convince them that transactions have completed successfully.
Dave Bittner (25:14):
"This scam is kind of old school, but it's got a new school twist with the fake apps."
Maria Varmazis shares distressing accounts of victims who believed they had received payments during in-person transactions, only to discover that the funds never transferred. These scams exploit the trust built during face-to-face interactions, making them particularly devastating.
Dave Bittner (30:00):
"I found it absolutely sickening that you could look someone in the eye, shake their hand and then rob them."
The discussion highlights the challenges law enforcement faces in tracking and prosecuting such scams, as well as the emotional toll on victims who lose trust in others.
Joe Carrigan (34:04):
"Is it a better plan to say to someone, cash only? Yeah."
4. Listener Submission: A Template for Job Scams
Concluding the episode, the hosts review a listener-submitted scam example. John shares a smishing attempt where scammers impersonate a recruiting representative, offering lucrative yet suspicious job opportunities. The scam includes promises of high earnings, minimal work hours, and unrealistic benefits, designed to lure victims into providing personal information.
Dave Bittner (37:54):
"Hello, I'm Lena, a recruiting representative at Adjust in parentheses for some reason... We look forward to your response. So we're very impressed by you. Are you over 18?"
Maria Varmazis (38:01):
"This is obviously some kit for a job scam that somebody bought. And they just said, all right, I got it. I'm going to start sending out text messages right now."
The example underscores the importance of skepticism and verification when approached with unsolicited job offers, highlighting the automated nature of such scams.
Notable Quotes
- Joe Carrigan (07:21): "They should be referred to as confabulations. Hallucinations come out of nowhere, out of whole cloth, and confabulations are based on a set of pre-existing facts."
- Maria Varmazis (12:29): "What Feross is pointing to here is that there is a growing concern that he has, and he's even seen it, where somebody has looked at the AI hallucinations and said, oh, this AI has come up with a package name that doesn't exist. Somebody else is going to try this. I'm going to go out and write that package. And that package is going to be malicious."
- Joe Carrigan (13:59): "The AI agent makes up the name of a library."
- Joe Carrigan (19:07): "The Talos team, certainly highly respected when it comes to cybersecurity investigations, found that this campaign was taking place in at least eight different states."
- Dave Bittner (25:14): "This scam is kind of old school, but it's got a new school twist with the fake apps."
- Dave Bittner (30:00): "I found it absolutely sickening that you could look someone in the eye, shake their hand and then rob them."
- Joe Carrigan (34:04): "Is it a better plan to say to someone, cash only? Yeah."
- Maria Varmazis (38:01): "This is obviously some kit for a job scam that somebody bought. And they just said, all right, I got it. I'm going to start sending out text messages right now."
Conclusion
In this episode of Hacking Humans, the hosts provide a comprehensive analysis of how advancements in AI are being co-opted by cybercriminals to execute more sophisticated and deceptive attacks. From exploiting AI hallucinations in software development to orchestrating mass smishing scams and revitalizing old-school in-person cons with modern tools, the podcast underscores the need for heightened vigilance and informed defense strategies in the face of evolving cyber threats. Listener submissions further highlight the real-world impact of these scams, emphasizing the human element in cybersecurity vulnerabilities.
Listeners are encouraged to remain cautious, verify the authenticity of digital interactions, and stay informed about the latest cyber threats to safeguard themselves and their organizations effectively.
