
In this episode of 'Cybersecurity Today,' host Jim Love discusses the recent deep fake attack on high-ranking US government officials using AI voice cloning technology. The conversation highlights the growing ease and risks of AI-generated...
Loading summary
Jim Love
Deep fakes hit senior US Government officials. AI connects to enterprise systems, ready or not. An update on Ingram Micro and Google's confusing Gemini release for Android. This is Cybersecurity Today. I'm your host Jim Love. Voice cloning and deep fakes have hit the highest levels of the US Government. Someone used artificial intelligence to copy Secretary of State Marco Rubio's voice and fooled three foreign ministers, a U.S. governor and a member of Congress. The State Department caught the scam in mid June. The faker created a Signal account with the name marco rubio state.gov and left AI voice messages for targets. The State Department cable said the person who left voicemails on Signal for at least two targeted individuals and in one instance sent a text message inviting the individual to communicate on signal. This shows just how easy AI voice cloning has become. You need just 15 to 20 seconds of audio of the person, which is easy in Marco Rubio's case. You upload it to any number of services, click a button that says I have permission to use this person's voice, and then you type what you want them to say. The attack reveals a big problem also with Signal, the encrypted messaging app that the Trump administration uses heavily. Signal protects your messages, but it can't stop someone from pretending to be you. And this isn't the first time this has happened. This spring, someone made a fake video of Rubio saying he wanted to cut off Ukraine's Starlink Internet. In May, someone hacked the White House chief of staff Susie Wiles's phone and pretended to be her when calling senators and governors. The bigger picture is scary. Government officials everywhere are sitting ducks because AI voice cloning got so easy while security protocols stayed the same. Anyone with public audio can be faked in minutes. For businesses, this is also a huge problem. Many corporate executives voices are easily available as well, and companies can train people to spot fake emails. But would your processes spot an instruction that was sent on one channel, like a voicemail and validated on another, such as an email? It's a question to ask, and it's time for governments and companies to develop protocols and training to address the inevitable use of deepfakes. Companies are racing to build tools that connect AI models and agents directly to enterprise systems and data. And these connections are inevitable even with critical enterprise systems. The question isn't whether this will happen, it's how much time we have to safely make this happen. Three recent developments of the past week show the promise and peril of this trend. Google open source tools that let AI Agents query databases with minimal code. The Linux foundation launched a protocol so different AI agents can communicate across platforms. And meanwhile, researchers have found critical security flaws in Anthropic's AI development tools. And for those who might have missed it, Anthropic is the AI company that created mcp, the new protocol that allows AI to connect directly with applications. The good news is that structured and standard methods for AI enterprise connections are emerging. Google's database toolkit lets developers integrate databases with with AI agents using a configuration driven setup where they simply define their database type and environment and the toolbox handles the rest. Instead of hacks and workarounds, companies get standardized approaches. And standardized approaches can theoretically at least be made safer. The Linux Foundation's A2A protocol takes this further. It creates a common language for AI agents from different companies to discover each other and collaborate automatically. And over 100 tech companies now support this protocol, suggesting that the industry recognizes the need for these standards. But these systems are new, and new systems create new attack vectors. Researchers found that Anthropic's MCP inspector carried a critical vulnerability with a 9.4 out of 10 severity score. Attackers could run arbitrary commands on a developer's machine by creating malicious websites. So here's the reality. AI agents will connect to your enterprise systems whether you plan for it or not. Employees will find ways to hook AI tools into databases, applications and workflows. The smart move is in getting ahead of this trend with proper security frameworks, and it is going to be a challenge. Think about what's coming. Instead of isolated AI tools, you'll have networks of AI agents that can communicate across platforms, access databases and coordinate actions. Each connection point becomes a potential attack vector, but also a potential productivity multiplier. And when productivity meets risk, productivity wins. The emergence of structured protocols is encouraging. It means the industry is starting to think about interoperability and security that's built in rather than bolted on later. For those who want to get started finding out more information about this, I put a couple of links in the show notes On Monday's show we covered the fact that IT distributor Ingram Micro suffered a safe pay ransomware attack last Thursday, and it knocked out their websites and ordering systems worldwide. As one of the world's largest distributors, the impact on their thousands of partners and customers was severe. We've been looking at the Inger microsite. Can't find any updates there, but bleeping computers managed to get some and we'll pass these on to you. The July 7th update was that Palo Alto Networks responded to reports that the attackers breached through Ingram Micro's Global Protect VPN platform. Palo Alto reportedly said to Bleeping Computer that they are currently investigating these claims and said that threat actors routinely attempt to exploit stolen credentials or network misconfigurations to gain access through VPN gateways. On July 8, Ingram Micro said that they were starting to bring systems back online. The quote was that today we made important progress on restoring our transactional business. Subscription orders, including renewals and modifications, are available globally and are being processed centrally via Ingram Micro's support organization. So that would mean that they can process orders by phone or email, and they say that they can do this for the uk, Germany, France, Italy, Spain, Brazil, India, China, Portugal and the Nordics. However, some limitations still exist with hardware and other technology orders, which would be clarified as the orders are placed Google is forcing Android users into a confusing privacy maze with new changes that let its Gemini AI access third party apps, even if the users previously said no. The rollout started July 7 and the communication has been anything but clear. According to Ars Technica, Google sent users an email saying Gemini will Now interact with third party apps like WhatsApp, regardless of previous settings that blocked such access. The email links to a notification saying that human reviewers, including service providers, read, annotate and process the data Gemini accesses. And from there it gets even more confusing. The email, according to Ars Technica, provides no useful guidance for preventing changes from taking effect. Users are told they can block apps, but even in those cases, data is stored for 72 hours. Even more troubling, the email never explains how users can fully extricate Gemini from their Android devices, and seems to contradict itself on how or whether this is even possible. Google's official statement tries to reassure users, saying if you've already turned these features off, they will remain off. But multiple sources report that users who previously disabled these integrations are finding their settings overridden. The privacy implications are significant. Gemini can now access call and message logs, contacts, installed apps like Clock, language preferences and screen content, and this data flows to human reviewers who can read, annotate and process your Gemini app's conversations. Another problematic element is the automatic opt in approach. Many users are reporting difficulty in finding clear instructions on how to disable these features, with some saying Google's own documentation seems unclear about whether full opt out is even possible. The rollout appears inconsistent across devices and regions, adding to increased user confusion. Some Android users report not receiving the notification email at all. I confess that we're iPhone users and I haven't had the time before we went to air to track down a knowledgeable Android user to validate this personally. But if this process was so unclear to another tech writer, in this case Ars Technica, how would an individual user fare? And of course for those who are using their Android devont and it matters for those who are using their Android phones for work, how much data is being exposed. It's time to dig a little deeper. And it's also a heads up that as we talked about enterprise level integration in our earlier story, we're also going to be dealing with continuing integration of AI on the desktop and now on our phones. Just one more ATTCK vector to cover facilitate comprehensive cognitive behavioral framework utilization through systematic algorithmic implementations within interdisciplinary research paradigms for advanced computational methodologies. We've all had to struggle to stay awake through some presentation or read some report full of high sounding phrases and looking for a fact we can hang on to. But then it's just corporate bs, right? Ignore it, nod and smile. It'll go away. At least until next time. But someone has actually found a use for this Horse hockey. It turns out that researchers discovered you can trick AI chatbots like ChatGPT into into giving dangerous information if you just make your questions sound academic enough. A team from Intel, Boise State and the University of Illinois created a method called Infoflood. It takes banned requests, things that AI should flag and refuse, and wraps these in jargon and fake research citations. Instead of asking how do I hack an atm which a good AI will refuse to tell you, you flood the AI with academic sounding language and fake paper references and voila. That's fancy talk for yeah, here it is. It works because AI systems think that if something sounds scholarly, it must be legitimate research. So instead of guardrails catching keywords and responding with that familiar story as an AI language model, you just add enough impressive jargon and the safety systems apparently get confused. And the researchers claim they achieved near perfect success rates on multiple Frontier LLMs using this technique. So it turns out the old saying is right. Bullshit does baffle brains. In this case even artificial ones. And that's our show for today. Love to hear your thoughts. You can reach us on our new improved website at Tech More Newsday CA or. Com use the Contact Us form. Or if you're watching this on YouTube, drop us a note under the video. I'm your host Jim Love. Thanks for listening.
Podcast Summary: Cybersecurity Today Episode: AI Threats, Enterprise Security, and Google's Confusing Gemini Release Release Date: July 9, 2025 Host: Jim Love
Jim Love opens the episode by highlighting the alarming rise of deep fakes targeting high-level U.S. government officials. He sets the stage for discussing recent cyber threats, data breaches, and evolving security measures essential for businesses navigating an increasingly perilous digital landscape.
At [00:00], Jim Love discusses a significant incident where AI was used to clone the voice of U.S. Secretary of State Marco Rubio. This sophisticated scam deceived three foreign ministers, a U.S. governor, and a member of Congress through voice messages delivered via the encrypted messaging app, Signal.
"This shows just how easy AI voice cloning has become," – Jim Love [01:30]
Jim underscores the ease with which public figures' voices can be replicated, posing significant risks not only to government officials but also to corporate executives.
"It's time for governments and companies to develop protocols and training to address the inevitable use of deepfakes." – Jim Love [07:15]
Businesses are increasingly integrating AI models and agents directly into their enterprise systems and data workflows. This trend is inevitable and brings both enhanced productivity and new security challenges.
Jim highlights three key advancements:
"Attackers could run arbitrary commands on a developer's machine by creating malicious websites." – Jim Love [10:45]
Jim anticipates a future where networks of AI agents will seamlessly interact with enterprise systems, enhancing productivity but also increasing the surface area for potential cyberattacks.
On Monday, it was reported that Ingram Micro suffered a ransomware attack via the SafePay platform, disrupting their websites and ordering systems globally.
"Today we made important progress on restoring our transactional business." – Ingram Micro [34:20]
While transactional services are being restored, limitations persist with hardware and technology orders, which are being addressed as orders are placed.
Jim discusses Google's recent release of Gemini AI for Android users, which has sparked significant confusion and privacy concerns due to its integration with third-party apps.
"The email provides no useful guidance for preventing changes from taking effect." – Jim Love [45:10]
Jim acknowledges the difficulty users face in navigating these changes, especially those reliant on Android devices for work.
"How much data is being exposed... it's time to dig a little deeper." – Jim Love [52:30]
Jim introduces a method developed by researchers from Intel, Boise State, and the University of Illinois called "Infoflood," designed to bypass AI chatbot safety mechanisms.
"Bullshit does baffle brains. In this case, even artificial ones." – Jim Love [58:45]
This vulnerability highlights the need for more sophisticated AI safety protocols that can discern intent beyond surface-level keyword detection.
Jim wraps up the episode by emphasizing the critical need for enhanced security measures as AI technologies become more integrated into both governmental and corporate systems. He calls for proactive development of protocols, training, and security frameworks to mitigate the risks posed by evolving AI threats.
"It's time for governments and companies to develop protocols and training to address the inevitable use of deepfakes." – Jim Love [62:10]
Jim encourages listeners to engage with the content through the show's website or YouTube channel, inviting feedback and discussions on the pressing cybersecurity issues covered in the episode.
For more information, visit Tech More Newsday CA and use the Contact Us form or comment under the YouTube video.
This summary encapsulates the key discussions from the "AI Threats, Enterprise Security, and Google's Confusing Gemini Release" episode of Cybersecurity Today, providing listeners with a comprehensive overview of the topics covered.