Transcript
Tom Uren (0:05)
This is Tom Uren, I'm here with the Grugq and another Between Two Nerds discussion. G'day, how are you?
Grugq (0:11)
G'day, Tom. Finding myself very well.
Tom Uren (0:14)
This week's edition is brought to you by Socket. Socket helps developers manage all their supply chain dependencies. Find them at socket.dev.

So this week Google released a new AI threat report. It went through, basically, how adversaries, cyber criminals, threat actors are using AI at pretty much every step in the, I don't know, the cyber criminal life cycle. Way back in November, which is a lifetime ago in terms of AI, there was a similar report by Anthropic, and they said they had discovered a Chinese threat actor who was using Claude to organize, I guess it was a campaign, but it was like Claude was doing all the grunt work, and it would basically come back to whoever was in charge and say, you know, here's what I found, do you want me to go back out? It was like Claude was the worker bees and management was sitting back seeing the results.

And at that time I wrote that AI-powered cyber espionage will favor China. I think that was the title of my piece. And that was because if you're going to YOLO it, at the time it seemed like AI would be great for that: you can get a lot more done, but the risks are higher. And that seemed to suit some people
Grugq (1:39)
with a high risk appetite.
Tom Uren (1:41)
That's right. Yeah. And my thinking was, this is something that organizations like the NSA or ASD or the Five Eyes won't do, because they want to get things right, not screw up and get outed. There's a very high premium on correctness.
Grugq (2:00)
Right.
Tom Uren (2:01)
So not that long ago I had a sponsor interview with Dan Guido, who's the CEO of Trail of Bits, which is, a sort of specialist cybersecurity consulting company would perhaps be one way to describe them. And he was talking about how they were using AI, and in fact it was to add a lot more robustness to processes, to actually make them more secure. And, this is my take anyway, the way they were doing that was breaking down longer chains of work into smaller processes, so that you could be very confident that an AI machine would do each piece correctly. Because it wasn't "go and write me some malware", it was "do this little thing, and then we'll have a check that it's right or wrong". So, for example, they were using it to write test cases that they'd never written before, because it was just way too much work.
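The pattern described here, small AI tasks each gated by a deterministic check, could be sketched roughly as follows. This is an illustrative assumption, not Trail of Bits' actual tooling: the model call is mocked with a canned draft, and the "check" is simply that the generated test case compiles as Python.

```python
# Hypothetical sketch of the workflow described above: instead of one big
# "write me the code" request, the work is split into small steps, and each
# step's output must pass a deterministic check before it is kept.
# All function names here are illustrative, not from any real tool.

def generate_test_case(func_name):
    """Stand-in for a model call that drafts one small test case."""
    # In practice this would query an LLM; here we return a canned draft.
    return f"assert {func_name}(2, 3) == 5"

def check_test_case(draft):
    """Deterministic gate: the draft must at least be valid Python."""
    try:
        compile(draft, "<draft>", "exec")
        return True
    except SyntaxError:
        return False

def pipeline(func_names):
    """Run many small generate-then-check steps, keeping only checked output."""
    accepted = []
    for name in func_names:
        draft = generate_test_case(name)
        if check_test_case(draft):  # only keep output that passes the gate
            accepted.append(draft)
    return accepted

print(pipeline(["add"]))
```

The point of the structure is that no single AI step is trusted on its own; each one is small enough that a cheap mechanical check (here, compilation; in practice, running the tests) can catch failures.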
