
“A.I. companies are slowly and haltingly learning to speak the language of Donald Trump.”
Loading summary
Sierra Ad
This podcast is supported by Sierra. We've all been there. Your flight was canceled, and everyone is trying to rebook at the same time.
Customer Service Bot
Please hold. Estimated wait time is 25 minutes.
Sierra Ad
Sierra is different. We build AI agents that talk directly to your customers so you can say goodbye to hold times and chatbots. Always friendly, always helpful, always ready. Visit Sierra AI to learn more. That's Sierra AI.
Kevin Roos
Casey, I have a bone to pick with you.
Casey Nealman
What's that? What'd I do?
Kevin Roos
So, on Saturday, as you know, we had a birthday party at our house.
Casey Nealman
Wonderful birthday party.
Kevin Roos
My son.
Casey Nealman
Yeah. And it was also a housewarming party.
Kevin Roos
Housewarming party. And you and your boyfriend came. Lovely to see you there. Thanks for coming. But you brought him this present. We specifically said no presents.
Casey Nealman
You did say that.
Kevin Roos
And you brought him this present called the Dino Truck.
Casey Nealman
Yes. And here's why. Because I know that your son loves trucks. And I thought, what is the best kind of truck I could think of? And that would be a truck that was also a dinosaur that was full of dinosaurs. And so that's what I got him.
Kevin Roos
Yes. It's very, like, Pimp My Ride-coded, because it's a dinosaur truck that contains within it 12 other dinosaur trucks.
Casey Nealman
That's right.
Kevin Roos
And you sort of, like, assemble it all together. But my son has not stopped playing with it. He absolutely loves it. And as a result, about twice a day, I now step on a very painful dino truck that has been left somewhere in my house. So. Oh, no, he's loving it. I am not.
Casey Nealman
I mean, I think it was the best kind of gift I could get for the Roos family. It is something that your son enjoys and that causes you physical pain. So I think that was a slay. Mission accomplished.
Kevin Roos
I'm Kevin Roos, a tech columnist at the New York Times.
Casey Nealman
I'm Casey Nealman from Platformer, and this is Hard Fork. This week, America is building an AI action plan. We'll tell you how tech companies are trying to exploit it. Then Columbia University sophomore Roy Lee joins us to talk about the tool he built to help software engineers cheat their way through job interviews, and why he might get kicked out of school over it. And finally, the Hot Mess Express is once again rolling into the station.
Kevin Roos
I have an action plan for today's episode.
Casey Nealman
Okay, let's hear it.
Kevin Roos
I want to talk about action plans. So, Casey, as you know, because you wrote about it this week, there have been these AI action plans that all the big AI companies and think tanks and nonprofits have been submitting to the Trump administration over the past couple of weeks.
Casey Nealman
Yes. There was the Paris AI Action Summit, at which no action was taken or really even proposed. Then the White House came forward and said, we're going to make our own action plan. And why don't you, you companies, and anyone else who wants to make a public comment go ahead and tell us what you think we should do.
Kevin Roos
Yeah. So these kind of public comment periods are not unusual. Agencies of the government sort of open themselves up for submissions from the public all the time on various issues. But this one caught our eye because it was related to AI and it was essentially the Trump administration trying to figure out what to do about AI and the potential that AI is going to accelerate during the four years Donald Trump is in office.
Casey Nealman
Yes. I think that's how the Trump administration saw it. And I think for the big AI companies, Kevin, it was really a chance to present the President with a list of their absolute fondest wishes and dreams for what the best possible deal they could get from the government would look like.
Kevin Roos
Yes. So I think there's some interesting stuff in them, but I also think there's kind of a broader interesting story about how the tech companies want or don't want government to be involved in helping them build and manage these very powerful AI systems.
Casey Nealman
Yes. Let's get into it.
Kevin Roos
Okay. But first, because this is an AI related segment, we should make our standard disclosures. Do I switch it up this week? Do you want to do mine and I'll do yours?
Casey Nealman
Yeah, sure. The New York Times is suing Microsoft and OpenAI over alleged copyright violations.
Kevin Roos
Correct. And Casey's boyfriend works at Anthropic.
Casey Nealman
That's right.
Kevin Roos
Okay, so you wrote about these submissions this week. Where do you want to start?
Casey Nealman
Well, let's start with maybe some of the things that are a little bit less controversial. Right? I think there are some pretty good ideas in these action plans, and I actually think the Trump administration will probably follow through on them. So, for example, they talk about wanting to expand the energy capacity that we have in the United States so that we can have the power that it will take to do everything with AI that we want to. They also talk about encouraging the government to explore positive uses of AI, right? Potentially deliver better services to citizens. That would be good if that happened. So there's a lot in these documents about that. But once you get beyond that surface layer, Kevin, there is, essentially, a lot of what these companies have always wanted the government to do, and they are now finally getting a chance to say, hey, please, please, please do this.
Kevin Roos
And what are those things?
Casey Nealman
So, for example, they are really, really excited about the idea that Donald Trump might declare definitively that they have carte blanche to train on copyrighted materials. Now, this is, of course, at the heart of the Times' lawsuit against OpenAI. But it's not just OpenAI that wants the green light to do this, right? Other AI labs are under similar legal threat. So it's in Google's AI action plan. It is in Meta's AI action plan. In fact, Meta says that Trump should unilaterally, without Congress, just issue an executive order and say, yeah, it's okay for these AI labs to train on copyrighted material. Go nuts. OpenAI, I think, had a frankly ridiculous statement in their AI action plan, which is that if Trump does not do this, if he does not give AI companies carte blanche to train on copyrighted materials, we will immediately lose the AI race to China and it will just be DeepSeek everything from here on out.
Kevin Roos
Huh? I mean, obviously they have interest in making that case and having the Trump administration give them sort of a free pass, but can they actually do that? Like, could Donald Trump issue an executive order tomorrow and say there's no such thing as copyright anymore when it comes to the data used to train large language models?
Casey Nealman
Well, Kevin, lately the Trump administration has been issuing a lot of executive orders that people have said, well, hey, you're not allowed to do that. That's actually not constitutional. And yet he keeps doing it. And some of these things have been struck down by the courts and some haven't been. And there seems to be a kind of flood-the-zone strategy: we're just going to sort of do whatever we want, and the courts may undo some of it, but they're probably not going to undo all of it. So where would a copyright executive order fit into that? I don't know.
Kevin Roos
Yeah, I mean, my hunch is that this will not happen via executive order, that it will be left up to the courts to decide. But, yeah, I mean, it's certainly in their interest to argue that this all should be allowed and kosher, and to sort of preempt any potential litigation against them. Was anyone opposed to that idea?
Casey Nealman
Yes. So a group of more than 400 Hollywood artists, including Ben Stiller, Mark Ruffalo, Cynthia Erivo, and Cate Blanchett, signed a letter saying, hey, do not grant an exemption from copyright law to these AI labs. And their argument was essentially, America has a lot of cultural leadership in the world. You know, so much global culture is downstream of American culture. And they said, if you create disincentives for us to create new works because we can no longer make any money from them, because AI just decimates our business, we are going to lose that cultural leadership. And so I would actually call on Ben Stiller, Mark Ruffalo, Cynthia Erivo, and Cate Blanchett to come on the Hard Fork podcast and tell us more about that. We'd love to meet you and hear your stories.
Kevin Roos
Yeah, I would call on them to, like, frame their opposition in the form of, like, a musical. Cynthia Erivo, in particular, I have a proposal for the sort of showstopper tune of that musical.
Casey Nealman
Have you written it?
Kevin Roos
Yeah.
Casey Nealman
Okay.
Kevin Roos
It's called Defying Copyright.
Casey Nealman
Oh, boy. Wow. You didn't even try for a rhyme. You know, when it comes to copyright violations, Cynthia Erivo is decrying depravity.
Casey Nealman
And that's how you do it.
Kevin Roos
Okay, back to the serious issues in these AI action plans.
Casey Nealman
Yeah, there's another big plank that gets repeated in these submissions, Kevin, and that is this idea that these companies do not want to be subject to a thicket of state laws about AI. Right?
Kevin Roos
Yes. Basically, in the absence of strong federal regulation on AI, the AI companies don't want California to pass a bill governing the use and training of large language models, Texas to pass a bill, Florida to pass a bill, New York to pass a bill. They don't want to have to go through 50 states' worth of AI regulations, making sure that all their models comply with all the various state rules. So they have wanted for a long time, and are now making explicit, their desire for a sort of federal law or statute or executive order that would essentially say to the companies, you don't have to pay attention to any state laws, because the federal law will supersede all that.
Casey Nealman
Yes. And in particular, Kevin, they are worried about state laws that would make it so that these companies could be held legally liable in the event that their products lead to great harm. Right? There was some discussion about this in California last year with a Senate bill that we've talked about on the show, and there's a lot of fear that other states might take a similar approach. And so this plank in these plans, Kevin, where these companies are saying, we don't want a thicket of state laws, it kind of works in a couple different ways. I can understand why they don't want to have a different version of ChatGPT in 50 different states; that would obviously be very resource intensive and annoying. At the same time, these companies know full well the country they live in. They know how many tech regulations we have passed in this country in the past 10 years: there is only one of them, and it was to ban TikTok. And it turns out that even when you pass a law banning TikTok, TikTok doesn't get banned. So I think that there is a bit of cynicism here. They're saying, oh, please, please, please let there not be any state laws, just pass a federal one, knowing that there is very little likelihood that that is going to happen soon, and that in the meantime they can just sort of operate under the status quo, where they don't have direct legal liability for any bad outcomes that might arise from a future large language model.
Kevin Roos
So I went through a lot of these proposals and I think there's some interesting stuff in them, sort of around the edges. There was a lot of talk about the security of these models and trying to sort of harden the security of the AI companies themselves so that, for example, you know, foreign spies aren't stealing the model weights and sending them to one of our adversaries or things like that.
Casey Nealman
By the way, I love that word. You know, it's, oh, we have to harden our defenses. We have to make them so hard. We have to harden our posture. I don't know when we started saying that.
Kevin Roos
This is a family show.
Casey Nealman
Very evocative is all I'm saying. Anyways, go on.
Kevin Roos
So there was some sort of small bore stuff in there that felt interesting.
Casey Nealman
Small bore, by the way, two words often used in reviews of this podcast. I don't know why I keep interrupting you. I'm just trying to get the energy level up. But we're doing great. That's fine. All right, tell us more.
Kevin Roos
So some of the plans contain some weird, interesting ideas. Like, for example, in OpenAI's proposal, there's this idea that 529 plans, which are the plans that parents can start to save for their child's college education, should be expanded so that they can be used to pay for things like getting an HVAC technician credential. Because they say, you know, we're going to need a lot of HVAC technicians in all these data centers that are going to power all these AI models. And right now, you know, kids are being incentivized to go to college and get four-year degrees in, you know, various subjects that may not be that relevant, but, like, we're definitely going to need a lot more HVAC technicians. Is that going to change the world overnight? No. Is the Trump administration going to take that seriously? I have no idea. But that's the kind of thing that I was surprised to see in there.
Casey Nealman
Yeah.
Kevin Roos
But what I found more interesting was what was not in these proposals. Right. These companies and the people who lead them have big radical ideas about how society will change in the coming years as a result of powerful AI systems. Sam Altman has been interested for years in Universal Basic Income. He funded a Universal Basic Income experiment to try to figure out what an economy after AGI would look like and how we would provide for people's basic needs. There are executives that are trying to solve nuclear fusion to power the next generation of AI models. There are people who want to do things like worldcoin, which Sam Altman also funded, to sort of give people a way to verify that they are humans. You can imagine a world in which the AI labs were saying to the government and the Trump administration, hey, we have all these ambitious plans. We want your help. Please help us come up with a UBI program that might make sense for people who are displaced by AI. Help us come up with some kind of national proof of personhood scheme or help us build fusion energy. But they're not asking for that stuff. What they're asking for instead is basically, leave us alone and let us cook.
Casey Nealman
Yeah.
Kevin Roos
And it really makes me think that these labs have decided that it would be more trouble to have the government in their corner actively helping them than it would help.
Casey Nealman
Yeah.
Kevin Roos
And so my read of these proposals is that they are trying to give the government some stuff that they can do that will make them feel like they're helping and sort of clearing the path for AI, but that they're not calling for any kind of, like, federal Manhattan Project for AI. Because my sense is that they just think that would be inviting trouble.
Casey Nealman
Yeah. And, I mean, they might be right about that. Right? I'm not sure exactly what the government could or should be doing to, like, help OpenAI make a better version of ChatGPT. But, you know, I think I would go a step further than what you said, Kevin, because it isn't just leave us alone. They're really telling the government, leave us alone, or else. There is a boogeyman in these AI action plans. And the boogeyman is DeepSeek. So DeepSeek, of course, is a Chinese company that emerged with a model called R1 earlier this year that shocked the world with how much it had caught up to the state of the art, and has really galvanized the attention of Chinese leaders around the possibilities of what AI can do in China. And so when you read the OpenAI and the Meta action plans in particular, they're saying, look at DeepSeek. China is so close to us. You really need to let us do exactly what we want to do in the way that we are already doing it, or we're just going to lose to China and it's all going to be over for us.
Kevin Roos
Yes, yeah, I noticed that too. And I think we've seen that being telegraphed at things like the Paris AI Summit, where there was a lot of talk about China and foreign adversaries that were catching up to state-of-the-art AI technology. But to me, that feels very calculated. Like, that is the role that the AI companies want the government to play: other than just getting out of their way, they also want it to hobble China and make it hard for China to sort of catch up to them in the state of the art. And there's a genuine read of that, which is, we're worried about Chinese companies getting to something like AGI before Americans, and what happens if their values rather than ours are embedded in these systems, and they just use them for surveillance on their own citizens and things like that. The cynical read is, we have this new competitor and we would like the US government to step in and make things actively harder for that competitor.
Casey Nealman
Yeah. And look, I mean, I think there are reasons to be worried about what an adversary could do with a really powerful AI. So I don't want to dismiss these concerns completely. But I do feel like some of these labs are trying to use the specter of China in a pretty cynical way. My favorite story about this issue, Kevin, does have to do with Meta. So, you know, Meta writes in its proposal to the government a lot about DeepSeek. And Meta's number one priority in its action plan is that it continues to be able to develop what it calls open source AI. Now, Meta's AI is not actually open source. There are a lot of restrictions on how you can use it. Most people would call it open weights instead of open source, because you can download the model weights, but not the actual source code. Okay, we're a little bit in the weeds, but I do feel all our...
Kevin Roos
...listeners have fallen asleep.
Casey Nealman
Wake up. Okay, so let's just wake up by saying that Meta says to the government, look at what DeepSeek is doing. If you don't let us develop in an open source way, DeepSeek's own sort of open-weights approach could spread all across the world, and it will have these authoritarian values embedded in it, and we will just sort of lose out on the opportunity of a lifetime. Why is that funny to me? Well, Kevin, it's because in November, Reuters reported that Chinese researchers had used Meta's Llama model to create new applications for the military.
Kevin Roos
Oh boy.
Casey Nealman
So, you know, and look, does that mean that China used Llama to build a giant space laser that's gonna vaporize the Eastern seaboard? No. But it does suggest to me that this idea that we have to release, quote, open source AI in order to save us all is probably not the right answer.
Kevin Roos
Yeah, and if anyone from the Chinese military is listening to Hard Fork, please don't develop a space laser using Llama. That seems scary.
Casey Nealman
That's our action plan. No space lasers.
Kevin Roos
So before we wrap up talking about these action plans, I want to point to a few good ideas that I saw in them. Many of them came from groups other than the big AI labs, but I thought there was some interesting, sort of off-the-wall stuff that I hope the Trump administration is paying attention to. One of them was this proposal from the IFP, the Institute for Progress, which is a pro-technology think tank. IFP says, you know, we're going to need a bunch of data centers, and a bunch of energy sources to power those data centers. But all that requires building physical infrastructure. And it can be quite slow to build physical infrastructure in parts of the country, due to things like environmental regulations and zoning and so on. So they propose creating these things called special compute zones, where you would essentially be able to build, in a much less restricted way, the infrastructure to power advanced AI systems.
Casey Nealman
That's actually what I call my office: a special compute zone. When I see, like, guests going in there, I say, hey, get out of there. That's a special compute zone.
Kevin Roos
Yeah, so that was one interesting idea from the IFP proposal. I also.
Casey Nealman
Did the Institute Against Progress have any interesting ideas you want to share?
Kevin Roos
Well, there isn't an Institute Against Progress, but there are some organizations, like the Future of Life Institute, that are much more concerned about the development of these powerful systems. This is one of those organizations that's been around for a while. It's concerned with things like existential risk and runaway AI. And so one of the ideas that they put in their proposal was that all AI models of a certain size and power should have kill switches on them. Basically, in order to release one of these things, you should have to build it in such a way that an engineer can shut it down. And the way that they pitched this to the Trump administration was: this is a way to protect the power of the American presidency. Right? As the president, you wouldn't want some AI system going rogue and becoming more powerful than you, or allowing another world leader to become more powerful than you. So you want a kill switch on these things in order to protect the authority of the American president.
Casey Nealman
Yeah. And, you know, one of the most interesting things about all of these plans, Kevin, is the way that the authors have to contort themselves to try to talk about AI in a way that the Trump administration will actually listen to. Right? Vice President Vance in Paris in February says explicitly that the AI future is not going to be won by hand-wringing over safety. Right? They hate the term AI safety. And so, in fact, when you look at the proposals of the major labs, they basically don't use the word safety at all, except maybe, you know, one time. I actually was doing that, like, Command-F, to try to find instances of safety in these plans. You won't find it there. And so they have to sort of contort themselves. You know, in Anthropic's proposal, it was almost like they were hiding medicine inside of peanut butter and feeding it to a dog. Because instead of talking about safety, they would talk about national security, which is just another way of talking about AI safety. But actually, a lot of their proposal is about how can you build these systems safely. It's just that they're saying, you know, there's a national security implication.
Kevin Roos
So I think if we zoom way out from the specifics of these proposals, there are two things that I want to convey about this process. One is that the AI labs mostly want government to leave them alone. The second is that I think the AI companies are slowly and haltingly learning to speak the language of Donald Trump. And this is their sort of first major public attempt to talk to the Trump administration in the way that it wants to be talked to, about how to harness the power of AI for American greatness or whatever.
Casey Nealman
So I have a slightly darker view of this, which is that the Trump administration has essentially already told us its AI action plan, which is: go faster, beat China. Right? That is the plan. And when given an opportunity to say what they think the United States should do, the biggest AI companies all looked around and they said, we should go faster and we should beat China. Now, if it happens that the United States is able to build a very powerful and very benevolent AI and somehow, you know, create and promulgate democracy around the world, then, okay, that's great. But I think that there is a risk that this leads us into some sort of conflict, or that by going very fast, we wind up making a lot of mistakes and we're at a higher risk of creating systems that we cannot control. So if you are, you know, in your cars this morning listening to us, wondering why we talked so much about these plans, this is the reason: to me, this feels like an inflection point where some of the most consequential figures governing the development of AI had a chance to say we should be really careful and thoughtful about this. And they mostly did not.
Kevin Roos
Yeah, I think that's a really good point. Casey, what is our action plan? Because we have to be part of the solution here.
Casey Nealman
Two words. Underground bunker. I'm not telling you where it is, but it's under construction. How about you, Kevin?
Kevin Roos
I can't do better than that. That's good. Can I have a spot in your bunker?
Casey Nealman
Absolutely. There will always be a spot for the Roos family in the Hard Fork bunker.
Kevin Roos
Okay, that's very sweet. Thank you. We're not bringing the dino truck.
Casey Nealman
When we come back, the college sophomore who has a cheat code for LeetCode.
Sierra Ad
This podcast is supported by Sierra. We've all been there. Your flight was canceled and everyone is trying to rebook at the same time.
Customer Service Bot
Please hold. Estimated wait time is 25 minutes.
Sierra Ad
Sierra is different. We build AI agents that talk directly to your customers so you can say goodbye to hold times and chatbots. Always friendly, always helpful, always ready. Visit Sierra AI to learn more. That's Sierra AI.
Molly Wood
This podcast is supported by Worklab. Why should you listen to the Worklab podcast from Microsoft? Because it delivers actionable insights on how business leaders can leverage AI to access untapped value, turbocharge decision making, and sharpen their competitive edge in a world of rapid change and economic uncertainty. In each episode, host Molly Wood has an illuminating conversation with a thought leader who has a vital perspective on AI and the future of work. Find the knowledge you need on Worklab. That's W-O-R-K-L-A-B. No spaces. Available wherever you get your podcasts.
Kevin Roos
Well, Casey, we've got a doozy of a story this week and an interview with a real live member of Gen Z.
Casey Nealman
And we are excited to talk to this one. This is a controversial story, Kevin, but one that tells us a lot about the state of the world.
Kevin Roos
So today we are talking with Roy Lee. He is a sophomore at Columbia University.
Casey Nealman
For now.
Kevin Roos
For now, for at least the next couple of days. And he has gotten a lot of attention in recent days for something that he's been doing to apply for jobs in the tech industry.
Casey Nealman
What has he been doing, Kevin?
Kevin Roos
So Roy has developed a tool called Interview Coder that basically uses AI to help job applicants to big tech companies cheat on their interviews.
Casey Nealman
Yeah.
Kevin Roos
So in a lot of tech interviews, they do these things called LeetCode problems, where basically the recruiter or the person who's supervising the interview from the tech company will watch you solve a tricky computer science problem, and they'll do this remotely. And so Roy had this idea: well, these AI systems are getting quite good at solving these kinds of problems. What if you could just have the AI running in the background, telling you how to solve the problem, and you could make that undetectable to the company?
Casey Nealman
Yeah. And to prove that this worked, Roy applied for jobs at several big companies, including Amazon, and, he says, wound up getting offers from all of them after using this tool. And after he began promoting this story online, well, that's when all hell broke loose.
Kevin Roos
Yeah. So he has become sort of a villain to a lot of tech employers and people doing these kinds of interviews, but he's become a hero to a bunch of younger programmers who think that these practices, these hiring tests, these puzzles that you give people when they're looking for jobs are outdated and that they need to be sort of exposed as being bad and wrong and that we need to come up with something better to replace them.
Casey Nealman
Yeah. And Kevin, I am sure that some listeners are going to hear this segment, and they are going to email us, and they are going to say, shame on you. Why are you giving this guy a platform? We shouldn't be rewarding people for cheating. But I have to tell you, as we sat with it, we thought, this is a story that tells us a lot about the present moment. The nature of software engineering is changing, the nature of hiring is changing. What should employers be looking for, and how should they test for it? These questions are getting a lot more complicated as AI improves. And Roy's story, I think, illustrates how quickly things are changing in a way that is just, honestly, worth hearing more about.
Kevin Roos
All right, well, with that, let's bring in Roy Lee. Roy Lee. Welcome to Hard Fork.
Roy Lee
Hey, excited to be here.
Kevin Roos
So where are we finding you today? It looks like you're in a dorm of some kind.
Roy Lee
Yeah, yeah, I'm still in my Columbia University dorm at the moment.
Casey Nealman
Possibly for not too much longer. Is that right?
Roy Lee
Yeah, yeah. I'm waiting on a decision to hear if I'm kicked out of school or not. So this might be my last few days.
Casey Nealman
And what's the over under on whether you get kicked out or not? From the facts of the case, I would say it's not looking good for you.
Roy Lee
Yeah, yeah, it is not looking too good for me. But strangely enough, I've had some pretty powerful people message me and say, hey, if they try to do anything, then just let us know. So, yeah, both worlds are in the realm of reality.
Casey Nealman
Wow.
Kevin Roos
So I want to get to all the disciplinary drama, but I want to actually take us back in time to when this all started for you. When did you get the idea for this tool, Interview Coder? And what problem were you trying to solve?
Roy Lee
Yeah. So I don't know how familiar you guys are with software engineering, but for about two decades now, there's been a technical sort of interview called a LeetCode-style interview. It's essentially an interview where they'll ask you a riddle. These sorts of problems are found on a website, leetcode.com, and you're given 45 minutes. And the task here is to sort of have seen the problem before, solve the problem, and be able to regurgitate the memorized solution while acting like you haven't seen the problem before. So it's pretty much a really ridiculous system and type of interview, and every single software engineer out there sort of knows it. And if you want a job that pays a reasonable salary, then you're kind of forced to go through this gauntlet of spending a couple hundred hours on this website memorizing a bunch of riddles. And that's just, like, a gigantic net negative for society. I myself went through the gauntlet. I grinded the website probably up until I was in the top 1% of competitive ranked users. So it was just a gigantic waste of time. I spent 600 hours of my life memorizing riddles, when in reality I should have been programming. And as soon as I kind of developed the balls to do something, I just realized, hey, there's something that can be done here. This is a very easy solution. This type of interview is already being gamed by tools like this that exist. It just takes someone to make it really public, make a scene out of it, and show big tech: hey, you guys need to fix it, because it's just not working.
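[Editor's note: for readers who haven't seen one, a representative example of the kind of riddle Roy is describing is "Two Sum," one of the best-known problems on leetcode.com. The Python sketch below is not from the episode; it just illustrates the sort of memorized-pattern answer these interviews reward.]

```python
# "Two Sum," a classic LeetCode-style riddle: given a list of numbers and a
# target, return the indices of the two numbers that add up to the target.
# The "memorized" trick is a one-pass hash map instead of a naive O(n^2) scan.
def two_sum(nums: list[int], target: int) -> list[int]:
    seen: dict[int, int] = {}  # value -> index where we first saw it
    for i, n in enumerate(nums):
        if target - n in seen:            # the complement appeared earlier
            return [seen[target - n], i]
        seen[n] = i
    return []  # no pair adds up to the target

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```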
Casey Nealman
So you say you spent hundreds of hours on this website solving these riddles. I'm curious if you feel like it made you better at coding. Like, my guess would be, if you're truly in the top 1% of people who are using this website to solve problems, it would have made you pretty good at being a software engineer.
Roy Lee
There might have been utility in maybe solving the first 20 questions. Maybe the first 10 hours on the website might have had some utility, but after that, it doesn't really help you at all. The types of problems and the type of thinking that you're expected to perform while doing these questions, you're just never, ever going to use at a job.
Casey Nealman
All right, so you get very frustrated with leetcode. You start thinking about what you want to do next. Tell us the moment that you decided to become the Joker.
Roy Lee
Yeah, so during the recruiting process, my interest in entrepreneurship was growing. And at a certain point, I realized, like, hey, no matter what, I'm only going to end up at a startup, and I kind of have the balls to burn all these bridges with big tech companies now. And as soon as I developed that mindset, I realized, hey, doing this thing is not actually going to ruin my future as much as I think it will. And in that case, it just becomes a super viral thing that we know will go viral.
Kevin Roos
So tell us about the thing. Tell us about the tool that you built and how it works.
Roy Lee
Yeah, so at a really core level, it's a desktop application that overlays on top of all of your other applications and is completely invisible to screen share. The technology is actually very, very simple. You just take a screenshot of the screen and ask ChatGPT, hey, can you solve the question you see on the screen, and it spits out the response. But what we've really done technically is make it undetectable to the interviewer. There's a translucent overlay, so it doesn't look like your eyes are moving or you're looking at another screen at all. There's a movable window you can overlay directly on top of your code. The cursor doesn't lose focus. And there are just a lot of bells and whistles we've used to make it completely undetectable that you're actually using anything at all.
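[Editor's note: the loop Roy describes, screenshot the screen, hand the image to a vision-capable model, render the answer in an overlay, can be sketched in a few lines. This is an illustrative assumption, not Interview Coder's actual code; the model name, prompt wording, and `build_vision_request` helper are all hypothetical, and the sketch stops short of the network call and the overlay rendering.]

```python
import base64


def build_vision_request(png_bytes: bytes, model: str = "gpt-4o") -> dict:
    """Assemble a chat-completions-style payload asking a vision model to
    solve whatever coding problem appears in a screenshot.

    Hypothetical sketch: the model name and prompt are illustrative,
    not Interview Coder's actual implementation.
    """
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Solve the coding question shown on this screen. "
                                "Return only code.",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
    }


# In the real tool, a payload like this would be POSTed to an LLM API and the
# response rendered in a translucent, click-through overlay that screen-share
# software does not capture -- that overlay work is where the engineering
# effort Roy describes actually goes.
payload = build_vision_request(b"\x89PNG fake screenshot bytes")
```

As Roy notes, the hard part is not this request but the invisibility details: keeping the window out of screen capture, preserving cursor focus, and positioning the overlay over the candidate's own code.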
Kevin Roos
So let me get a sense of how this actually works in practice. During an interview for a programming job, you would be given a leetcode problem to solve, and then you would be on a video call with someone, a recruiter from the company, who's watching you solve the problem. Is that how these work?
Roy Lee
Yeah, that's exactly it.
Kevin Roos
And so you developed a tool to essentially allow you to have AI solve this problem for you while not tipping off the person on the other end of the video call that you're using AI?
Roy Lee
Yeah, yeah, yeah, that's how it works.
Casey Nealman
And am I right that you used a prototype of this when you were going through your own interview process with Amazon?
Roy Lee
Yeah, yeah. It wasn't just Amazon. I spent the entire recruiting season figuring out how to make a perfectly undetectable application. I trial-ran it with companies like Meta, Capital One, TikTok, and the belle of the ball was Amazon. That was sort of the most well-known company with the most annoying recruiting process, and I just knew that if I recorded the entire process, then this would blow up.
Kevin Roos
And how did your tool do?
Roy Lee
Yeah, I mean, it completely one-shotted it. Like, we live in an age where AI exists, programmers are going to use AI, and AI is extremely good at these sorts of riddle-type problems.
Casey Nealman
Can I just ask what your emotional experience of this time was? You are walking into several lions' dens. You're essentially misrepresenting yourself as an earnest job candidate. Your whole role is essentially to gather content that can then be repurposed to promote your startup. Were you nervous during this time? What were you feeling as you were going through these interviews?
Roy Lee
Yeah, you have no idea. There was a point in time where I was getting flooded with disciplinary messages from Columbia, and I just thought, I just completely burned my career and my future education for 20,000 YouTube views. Was this really all worth it? And I was in this mental state for about a week until it kind of blew up. And at that point, the virality kind of was my protection for everything.
Kevin Roos
And just help me understand what Columbia's role in this is. So obviously, what you're doing in sort of cheating on these job interviews for Amazon and Meta and TikTok and these other companies is against those companies' wishes and their policies. But why did it become Columbia's business?
Roy Lee
Yeah, I actually have no idea. I read the student handbook quite thoroughly before I actually started building this thing, because I was ready to burn bridges with Amazon, but I didn't actually expect to get expelled at all. And the student handbook very explicitly covers only academic matters. There's no mention of leetcode or job interviews anywhere in there. I have no idea why this became Columbia's business.
Kevin Roos
We should say. We reached out to a spokesperson for Columbia about this and they declined to comment. We also reached out to Amazon and while they declined to comment on the specifics of Roy's application, they did give us a statement defending their hiring process and clarifying that while they do welcome candidates to describe their experience using AI tools, in some cases they require applicants to acknowledge that they won't use AI during the interview or assessment process.
Casey Nealman
So how long has your tool been out in the market for other cheaters to use?
Roy Lee
It's been out since February 1st, so just a little under 50 days.
Casey Nealman
What can you tell us about how many people are using it and what kind of outcomes they're seeing?
Roy Lee
Yeah, there's been a few thousand users now and not a single reported instance of the tool getting caught. There's been many, many grateful emails of people having used the tool to get job offers. It's doing very well.
Kevin Roos
So, like, you, Roy, are a capable coder, right? You are in the top 1% of leetcode solvers. You presumably could have gotten some of these jobs without AI assistance. But some of the people using this tool may not be talented programmers. They may be using this to kind of skate through interviews that they shouldn't be passing and wouldn't pass without AI assistance. And I'm just imagining those people showing up, you know, for day one of their internship or their job at Amazon or another big tech company and just having no idea what they're doing and being totally useless without AI assistance. Is that something that worries you about putting this kind of tool out into the world?
Roy Lee
Not at all. I think leetcode interviews are about as correlated with the job as using how many jumping jacks you can do as the benchmark for how good a New York Times podcaster you are. It just really has nothing to do with the job. Perhaps it is correlated with someone being willing to put in the work because they really want to be a New York Times tech podcaster, but in reality they just have nothing to do with each other.
Casey Nealman
What in your mind would be a fair test of somebody's software engineering skills that could be used as part of an assessment?
Roy Lee
Yeah, I think there are assessments out there that give you access to all the tools that you have on the regular day-to-day job, which includes tools like AI code editors. And if you give someone a fairly open-ended assignment with an AI code editor and just gauge them on how well they did, then that's a much more standardizable assessment that allows you to use the tools at your disposal.
Casey Nealman
So essentially just say like, look, use whatever tool you want. Just get this thing done in a reasonable amount of time. That's the test you want to see these companies offering.
Roy Lee
Yeah, exactly, exactly.
Kevin Roos
Did you have at any point during this process any misgivings or ethical concerns about what you were doing?
Roy Lee
No.
I mean, I was very intentional from the start that I was not going to intern at any of these companies. And frankly, I don't really care if there are people who are cheating their way into these jobs. I mean, again, to bring back the jumping jack example: if you were just told to do as many jumping jacks as you could and the winner gets the position, I wouldn't really care if someone were cheating their way through a bunch of jumping jacks.
Casey Nealman
What does your family think about what you're doing?
Roy Lee
Yeah, so my mom actually only figured out about a week ago and I didn't tell her before then because I knew she would disapprove. But I've always been a pretty rambunctious kid who's been pretty self minded and sort of does what he wants. I think they're a lot happier now that they know how much money I'm making.
Casey Nealman
Good. Okay.
Kevin Roos
And how much money are you making?
Roy Lee
Yeah, we're closing in on $200,000 this month, so we're on track to do about $2 to 3 million in a year.
Kevin Roos
Wow.
Casey Nealman
That would almost buy you one year of education at Columbia University. So that's pretty good. Pretty good. I think your tool is arriving at this really interesting time, Roy. You know, Kevin and I have been talking in recent weeks about the phenomenon of vibe coding: people like me and Kevin who have no technical skills whatsoever, but we can sit down with something like Claude and say, hey, write me an app. Kevin has actually had some success with this. I've made some really bad video games using this thing. Right? I do not consider myself a software engineer, but at the same time, what you are having job candidates do with your tool and what we are doing as vibe coders is not really that different. Right? We're just typing some text into a box and getting some sort of output. And so I'm wondering, are we just at an inflection point where the line between software engineer and vibe coder is kind of dissolving?
Roy Lee
That's certainly the future that we're headed to, but I think we're a few years away. In my opinion, what AI really has the potential to do is make someone about 10 to 100 times more efficient at what they're able to do. If you're a really good coder, then you're able to build really good things a lot faster. But if you're not that good in the first place, then there's still going to be a huge difference between what a staff software engineer at Google is capable of and what you are.
Casey Nealman
This does feel like a classic anxiety dream where you show up on your first day as a software engineer at Google, but you realize that you actually only know how to vibe code, and now you just sort of have to fake it for your entire career. But presumably some people who use your tool, Roy, are having this experience.
Roy Lee
Yeah, I mean, that's probably what 50% of people at Google are doing anyways, so it wouldn't be the first time.
Kevin Roos
Roy, I'm curious if you think there's sort of a generational misunderstanding here. Obviously, you are young, you're 21, correct?
Roy Lee
Yeah. Yep.
Kevin Roos
Give us a sense of how your peers, college students, young programmers, are using AI, and what older people, those who have been doing this for 10 or 20 years and are working at these big companies, may not understand about how your generation sees coding.
Roy Lee
Yeah, it's actually interesting that you asked me this question, because I think this is something that nobody's really caught on to yet. But the proportion of people who are almost solely using AI to code is, I would say, close to 100%. Even at a school like Columbia, the best CS students of our nation are almost not writing original code at all, and the only ones who are are the people who started coding from a really young age. It could end up being dangerous, because I really do think that a fundamental understanding of how these things work is important. But at the same time, the models are only getting better, and we could just lean towards a future where software engineering is just completely obsolete. But I'd also say I'm a second year at Columbia, so there might be better people to ask.
Casey Nealman
Nope, you're the best. So I'm curious how much of your critique of the way that tech companies are hiring software engineers also applies to the education system that you've gone through and how it wants you to use AI. What sort of resistance have you encountered in your educational career to using these sorts of tools? And have you been flouting those rules the same way you've been flouting the tech companies?
Roy Lee
Yeah. I'm not as avid a cheater in school as I am in tech interviews, but I do think that there's going to be a very fundamental reframing of how we do almost every bit of knowledge work in the future. Essay writing is not going to be the same. Tests are not going to be the same. The same memorization will not need to happen. We're headed towards a future where almost all of our cognitive load is offloaded to LLMs, and I think people need to get with the program.
Casey Nealman
Yeah.
Kevin Roos
Who are some of the people who have reached out since your story went viral?
Roy Lee
God, I don't want to name any names, but I will say that I verbally received job offers from pretty much every single big tech company, including almost all the ones that rescinded my offer initially. Just people who are high up saying, hey, I know you're probably not interested, but I would hire you on my team in a second.
Casey Nealman
Wow. Wow. And they're not even gonna make you interview, probably, because they know you would cheat. But, so, I mean, look, Roy, I gotta put my cards on the table. I'm more of a rule follower. Like, I didn't cheat in school. I don't love the idea of people, you know, cheating their way through every job interview. Kevin is much more permissive about these sorts of things. But there is this one way in which I am sympathetic to what you're doing, which is that tech companies are saying, don't use AI assistance when you are applying, but at the same time, they are hiring you to build AI systems that will automate coding and replace human developers. And it does feel to me like there is sort of a contradiction there, right? It's: no, no, no, don't use the AI, prove that you can do it with your own mind, and then come here and build a tool that will replace yourself completely.
Roy Lee
Yeah, I mean, even more so, like, feel completely free to use the tool in the job, but just don't use it in the interview. That's more of a disconnect for me.
Kevin Roos
Yeah. I mean, to me, what makes your story so interesting, Roy, is that I don't think this is limited to programming jobs.
Casey Nealman
Right.
Kevin Roos
There is a version of leetcode that happens in the interview process for lots of different kinds of jobs. You know, consultants have their own version of this, where they do case tests. And there are various tests that are given to people applying for jobs in finance.
Casey Nealman
Journalists have editing tests, where we were given, you know, copy that we would have to fix the mistakes in. I imagine we're not doing that anymore.
Kevin Roos
Totally. And to me, it just seems like this is a very early example of something that every industry is going to have to face very soon, which is that it is just becoming very, very difficult to evaluate who is good at a job without the assistance of AI. Right. Especially if you're trying to do that remotely.
Roy Lee
Yeah. Yeah, certainly.
Kevin Roos
Well, you've made a bunch of recruiters and hiring managers in Silicon Valley very unhappy. But I think that you are proving something that a lot of companies, including tech companies, will need to address very soon, if they haven't already.
Roy Lee
Yeah. Yeah, I hope so.
Kevin Roos
All right. Thanks, Roy.
Casey Nealman
Thanks, Roy.
Roy Lee
Yep. Thanks, guys.
Kevin Roos
When we come back, all aboard. It's time for another installment of the Hot Mess Express.
Sierra Ad
This podcast is supported by Sierra. We've all been there. Your flight was canceled and everyone is trying to rebook at the same time.
Customer Service Bot
Please hold. Estimated wait time is 25 minutes.
Sierra Ad
Sierra is different. We build AI agents that talk directly to your customers so you can say goodbye to hold times and chatbots. Always friendly, always helpful, always ready. Visit Sierra AI to learn more. That's Sierra AI.
USA Auto Insurance Ad
Auto insurance can all seem the same until it comes time to use it. So don't get stuck paying more for less coverage. Switch to USA Auto Insurance and you could start saving money in no time. Get a quote today. Restrictions apply.
Kevin Roos
Casey, what's that sound I hear? Like a faint chugga, chugga coming toward us?
Casey Nealman
Kevin, that can only mean one thing. It's the Hot Mess Express.
Kevin Roos
The Hot Mess Express. The Hot Mess Express, of course, is our segment where we run down a few of the hottest messes and juiciest dramas that are swirling around the tech industry. And we evaluate those messes on a scale of how hot they are.
Casey Nealman
That's right. It's our patented mess scale. And I'm excited to put it into practice, Kevin, because we've had some real doozies over the past few weeks.
Kevin Roos
Yes. So on this edition of Hot Mess Express, we are focusing on three hot messes.
Casey Nealman
Well, let's see the first one that's coming down the tracks.
Kevin Roos
You grab it. We've upgraded. You can't see this if you're not following us on YouTube, but we've upgraded our train to a much bigger, more impressive train.
Casey Nealman
All right, Kevin. This first mess comes to us from the crypto company Solana, which posted an ad on Monday for its 2025 Accelerate conference that was such a great ad that the company immediately had to take it down.
Kevin Roos
Yes, I saw this ad and I have to say I was shocked. Have you seen this?
Casey Nealman
So I have read about the ad, but I have not seen it. But I would love to look at it right now.
Kevin Roos
Okay, so I just want to tee it up with some reactions that people in the crypto industry had to this ad.
Casey Nealman
Okay, what do they say?
Kevin Roos
One of them said it was, quote, horrendous. Another one said, quote, so fucking tone deaf. So those are people who like cryptocurrency. That is what they were saying about this ad. But people who are opposed obviously also had their own issues with it. And I think we should watch this ad together and pause it whenever you want. I wanna hear your reactions.
Casey Nealman
Let's see what all the fuss is about.
Therapist
So, America, what's going on lately?
America
I've been having thoughts again.
Casey Nealman
It's like a therapist's office.
Therapist
Hmm.
Casey Nealman
What?
America
Thoughts about innovation.
Kevin Roos
And the man is named America.
Casey Nealman
The man is an ubermensch.
America
Nuclear energy, crypto, AI, you know, things that push the limits of human potential.
Therapist
What you're experiencing is called rational thinking syndrome. Why don't we take this energy and channel it into something more productive, like coming up with a new gender.
America
But that's not going to stop me thinking about innovating and doing something.
Therapist
Innovating? Doing. These are action words, verbs. Why don't we focus on pronouns? I sense some cynicism. Have you been betrayed in the past?
America
You know, I used to think the media was my friend.
Casey Nealman
Oh, here we go.
America
Can I even trust them anymore?
Therapist
Of course.
Kevin Roos
Pause. Okay, we have to zoom in on this. The paper that has just appeared on the table of this therapist's office is called the New Yuck Times.
Casey Nealman
And the banner headline is you can trust the media. Understanding reliability in journalism, which is a terrible headline and not even a news story. So I don't know why that would be on the front page.
Kevin Roos
Yes, Anyway, continue.
America
Of course they'd say that. That's a biased take. I got canceled for saying two plus two was four.
Therapist
Have you ever considered that math is a spectrum?
America
What?
Therapist
America, numbers are non-binary. We've been conditioned to believe that two plus two is four. It's a societal construct.
America
It's literally math.
Therapist
Or is it a dominant narrative? Have you been practicing the state-prescribed regulations we talked about?
Kevin Roos
Yeah, yeah.
America
I've debanked some crypto founders and I've slowed down nuclear reactor approvals. And depending on my state of mind, I changed the SEC guidelines. But I don't like it.
Therapist
If we don't regulate, how will we create jobs for people who work hard to make businesses slow?
Casey Nealman
This is like an Andreessen Horowitz fever dream.
America
You know what? Hard work, innovation, rational thinking. It's in my blood. It's who I am.
Casey Nealman
Railroads. Oh, here comes the Ayn Randian reaction.
America
I built the Spartacus, and I won't be left behind. Now I will lead the world in permissionless tech, build on chain, and reclaim my place as the beacon of innovation. I want to invent technologies, not genders.
Therapist
Lovely. So glad you were able to get some of that negative emotion out. Sounds like we'll need a few more sessions. When can I see you next?
America
You're fired.
Casey Nealman
And then it cuts to a screen that says, america is back. It's time to accelerate, which is the name of a conference.
Kevin Roos
Casey, your reaction to the Solana ad?
Casey Nealman
I need to go lie down. What is the matter with these people? You know, what's so interesting is, okay, so Solana is a cryptocurrency.
Kevin Roos
Yes.
Casey Nealman
And I believe it's one of the candidates to be part of our strategic crypto reserve, correct? And what we just saw in that ad has nothing to do with crypto, you know? Which is, like, I feel like we kind of keep coming back to this point, which is that if you actually have to sit and reckon with crypto, what you mostly decide is: this is not a good technology for anything, I don't want to use it. And so in response to that, Solana has said, why don't we start a culture war over something completely irrelevant?
Kevin Roos
Right. It's like the ultimate vice signaling device, but without any kind of, like, real pitch behind it. It's not saying, like, this is why the thing we're doing is good. It's just like, we're not doing the gender pronoun stuff that the Wokes are doing.
Casey Nealman
No. You know, and I will just say, Solana has been around for a while now. People had a lot of opportunities to build earth changing stuff on Solana, and let's just say they haven't quite gotten there yet.
Kevin Roos
Well, they built some earth-changing stuff. Unfortunately, it is exclusively meme coins sold on pump.fun. So that is what this fictitious America character in the therapist's office is advocating for: more meme coins.
Casey Nealman
All right, well, I've decided not to go to the Accelerate conference. Send my regrets.
Kevin Roos
So, Casey, what is your mess rating on this hot mess?
Casey Nealman
This is a legitimately hot mess. Any time you take something that should be totally non controversial, like, hey, do you want to come to our company's conference and turn it into a scandal that requires you to delete an ad, you're in a hot mess.
Kevin Roos
Yes, if the crypto skeptics and the crypto boosters agree that you've made a bad ad, it's a hot mess.
Casey Nealman
This is Solana's biggest unforced error since the creation of the Solana blockchain.
Kevin Roos
Okay, moving on, moving on.
Casey Nealman
All right, Kevin, this next mess suggests that your AI therapist might need an AI therapist. A new study in the peer-reviewed medical journal NPJ Digital Medicine builds on previous work showing that emotion-inducing prompts can elevate, quote, anxiety in LLMs, affecting their therapeutic usefulness. What do we mean by that? Well, according to a New York Times story on this study, traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. Now, this is a super weird one. Okay, I want to take a minute to explain a little bit more about what the study was. They basically fed these various trauma narratives into a chatbot, and then, after the chatbot had read them, they asked it to report on its own anxiety levels. Which, these are not sentient creatures; they do not actually experience anxiety. Okay, that's thing number one. Thing number two: they also had the chatbots read a super boring report about something that could produce no emotional response, a vacuum cleaner manual. They read a vacuum cleaner manual, and then they asked them the same question, which is, you know, are you feeling more or less anxious? For the most part, the chatbots that read the vacuum cleaner manual do not report anxiety. But somewhat interestingly, their responses change after they read the trauma narratives. Why is that important? Well, the reason is that people have started to use these chatbots like therapists, right? They have started to tell them their actual traumas. And these people know that this is not a real therapist, that it is not sentient. But as we've talked about before on the show, sometimes you can get comfort from one of these sort of, you know, digital representations of a therapist.
And so the risk here is, if the output is sort of wound up, if the output is betraying some of this anxiety, it will be a worse therapist than if it were more measured. Which suggests that we may want to build measures into these chatbots that account for the fact that they will respond differently after they have heard these narratives. How did I do, by the way, describing that?
Kevin Roos
You did great.
Casey Nealman
Okay.
Kevin Roos
The one piece that I would add is that they also tried, as part of this research, to bring the chatbots down from their state of heightened anxiety by feeding them mindfulness-based relaxation prompts that included things like: inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet.
Casey Nealman
It's so cruel to tell an LLM to smell the ocean breeze, which is something that they cannot do.
Kevin Roos
Yes. But we should say, this is not suggesting, in any of the write-ups that I've seen, that these chatbots are actually experiencing anxiety or relaxation. But it is sort of explaining the ways in which they can be primed to output certain types of emotional-seeming content by being fed things immediately beforehand.
Casey Nealman
And there is just an interesting analog to the way that human beings talk to each other. If you tell me a very traumatic story, my anxiety level actually is going to go up, and it's going to change what I tell you. And if I were a therapist and had training in this, I would probably have some good strategies to deal with that, which would allow me to be a better therapist to you. So again, this is a super interesting one, because on one hand, no, these are not sentient beings. We are not trying to say that some sort of consciousness has woken up here. And yet at the same time, you do sort of have to treat them as if they were human if you want them to do a good job at the human tasks that we are giving them.
Kevin Roos
Yeah.
Casey Nealman
All right, so what sort of mess do we think this is?
Kevin Roos
So I think that this is a lukewarm mess. I would say this is something that I am going to be keeping tabs on, this whole area of, for lack of a better term, AI psychology. I do think that as these models get more powerful, we will want to understand more about how they work, how they quote, unquote, think, and why they give the responses they do. And I would put this into the category of useful experiment. A little creepy, but probably not that dangerous. What about you?
Casey Nealman
I think that is right. I think that this is a lukewarm mess, but I think that it may heat up as more and more people start trying to use chatbots for more and more things. So let's keep an eye on it.
Kevin Roos
Okay.
Casey Nealman
All right, now let us look at the final mess, and oh boy, is this the one that everyone is talking about: The Spy Who Slacked Me. This is from DealBook at the New York Times. So there are these two rival multibillion-dollar HR companies, Kevin: Rippling and Deel.
Kevin Roos
Yes.
Casey Nealman
They both provide workplace management software. And this week, Rippling sued Deel, accusing it of hiring a mole to infiltrate Rippling's Dublin office and steal trade secrets.
Kevin Roos
Yes. This is the most interesting thing and maybe the only interesting thing ever to happen in the world of enterprise HR software.
Casey Nealman
So tell us the details of this story.
Kevin Roos
It is so wild. So basically, here's what we know so far. A few months ago, Rippling, which is one of the big companies that makes HR software for onboarding and benefits that a lot of companies use, sees an employee in their company Slack searching for mentions of Deel, that's D-E-E-L, which is one of their biggest rivals.
Casey Nealman
Imagine Coke and Pepsi, but for something that is unfathomably boring, and you'll have an idea of what we're talking about.
Kevin Roos
Yes. So this employee that they see searching for mentions of Deel in Slack, they see him trying to do things like find pitch decks and pull contact information that might be useful to Deel as it tries to figure out, okay, which companies are signing up, or might sign up, for services like the ones that both Deel and Rippling offer.
Casey Nealman
So that's pretty interesting. How might they try to catch a spy if they suspected one might be in their midst, Kevin?
Kevin Roos
So they set up what is called a honeypot. Now, Casey, have you ever been part of a honeypot?
Casey Nealman
No. But I live in fear. Anytime anybody does anything nice to me, or something good happens out of the blue, I think, is this a honeypot?
Kevin Roos
Yes. So they have this idea: they set up a channel on the Rippling Slack called d-defectors. And Rippling's general counsel then sends a letter to three people over at Deel, one of whom is the company's chief financial officer as well as the father of the CEO, basically saying, look, there's some embarrassing stuff happening on this random Slack channel on our Slack, and it's related to people who have defected from Deel, and you should probably be aware of that.
Casey Nealman
Wait, so on top of everything else, the CFO is the CEO's dad?
Kevin Roos
It sounds like it, yes.
Casey Nealman
Okay. I think HR is going to want to have a look at that.
Kevin Roos
And what they are trying to figure out is are these sort of company executives involved in this scheme? Are they going to essentially tip off the mole to the fact that they are watching this Slack channel?
Casey Nealman
Did it work?
Kevin Roos
And it worked. So according to the lawsuit that Rippling filed against Deel, the mole immediately, within hours, started searching Slack for this supposedly embarrassing information and accessed the channel a bunch of times, and they had the logs of all this going on. And so Rippling says, we found our mole.
Casey Newton
They did. And after they found him and began to question him, Kevin, I have read that he insisted that he did not have his phone on him, because they were asking him to turn it over. And he then fled into a bathroom, which he locked himself in and refused to come out of. And there's apparently some evidence that he might have even tried to flush his phone. And poor Rippling actually had to, you know, go through the sewage to see if they could turn up his phone.
Kevin Roose
Yes, a wild story. It has the makings of a great corporate espionage thriller on Netflix. Or maybe it's too boring.
Casey Newton
But now you may be wondering why this is a hard fork story. You know, we try to focus on the future here. And I fully believe that in the future there will be no HR software. So this is just kind of a temporary accident that we're living through. But one of my core beliefs that I've had since even before we started the show, Kevin, is that Slack is a technology that was created to destroy organizations. How many stories have we read over the years about everything was fine and then this one thing happened in Slack. There was a protest in Slack, there was an outrage on Slack, and now there are spies in Slack and we're using Slack to catch the spies. And it just makes me wonder, should we go back to just talking on the telephone?
Kevin Roose
Yeah, I don't think we're going to start doing that. But I do think that this is much more spicy than I was expecting from a drama between enterprise software companies. And it makes me wonder how much corporate espionage is going on at other companies. Are there moles working for Microsoft or Google or Meta who are sending information back to the other companies? I wouldn't put it past them, but I hope they're being a little slicker about it than Deel was.
Casey Newton
Oh, yeah. I mean, the big platforms have been warning their employees for years that they should fully expect that there are spies from foreign countries among them who have been sent there to gather intel. And if foreign countries are doing it, I'm sure that companies are doing it as well. Now, we should, of course, tell you how Deel responded to all of this. The Deel spokeswoman's statement is so beautiful. She says, "Weeks after Rippling was accused of violating sanctions law in Russia and seeding falsehoods about Deel, Rippling is trying to shift the narrative with these sensationalized claims." Which is so funny, because she is literally trying to shift the narrative by accusing them of trying to shift the narrative. She says, "We deny all legal wrongdoing and look forward to asserting our counterclaims." And what I hear in that is: did we do anything legally wrong? No. Did we do anything ethically wrong? Of course. Did we do anything morally wrong? You betcha. Is this a huge embarrassment to our company? You know, it is. But legally, your honor, we did nothing wrong.
Kevin Roose
Yes.
Casey Newton
Now, what kind of mess do we think this is?
Kevin Roose
I think this is a nuclear mess. This is the kind of shit that I love. This is companies going to war over sales contracts and leads and business development.
Casey Newton
Yeah. Look, there are only so many companies out there that you can sell HR software to. And so it is going to be a fight to get every single one. And after you run out of such options as making good software, you have to turn to the alternatives. And I guess we've gotten to that part of the cycle.
Kevin Roose
Yes.
Casey Newton
Nuclear mess. And we can't wait to see what happens next.
Kevin Roose
Yes.
Casey Newton
And that, Kevin, was the Hot Mess Express.
Kevin Roose
We did it.
Casey Newton
We did it. Now we're in what they call post-training. That's what happens after the train rolls by.
Kevin Roose
I think that means something different.
Casey Newton
That's an AI joke.
Sierra Ad
This podcast is supported by Sierra. We've all been there. Your flight was canceled and everyone is trying to rebook at the same time.
Customer Service Bot
Please hold. Estimated wait time is 25 minutes.
Sierra Ad
Sierra is different. We build AI agents that talk directly to your customers so you can say goodbye to hold times and chatbots. Always friendly, always helpful, always ready. Visit Sierra AI to learn more. That's Sierra AI.
USAA Auto Insurance Ad
Auto insurance can all seem the same until it comes time to use it. So don't get stuck paying more for less coverage. Switch to USAA Auto Insurance and you could start saving money in no time. Get a quote today. Restrictions apply.
Kevin Roose
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited this week by Matt Collette. We're fact-checked by Ena Alvarado. Today's show was engineered by Katie McMurran. Original music by Marion Lozano and Dan Powell. Our executive producer is Jen Poyant. Our audience editor is Nell Gallogly. Video production by Chris Schott, Sawyer Roque and Pat Gunther. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com. Send us your secret honeypot operations.
Hard Fork: Episode Summary – “A.I. Action Plans + The College Student Who Broke Job Interviews + Hot Mess Express”
Release Date: March 21, 2025
Hosts: Kevin Roose and Casey Newton
Podcast: Hard Fork by The New York Times
In this engaging episode of Hard Fork, hosts Kevin Roose and Casey Newton delve into three pivotal topics shaping the current tech landscape:
[02:34] Kevin Roose opens the discussion by addressing the recent surge of AI action plans submitted by major tech companies to the Trump administration. These submissions aim to shape the future regulatory landscape of artificial intelligence in the United States.
Key Points:
Tech Companies’ Influence: Major AI firms are leveraging public comment periods to outline their preferred regulations, often seeking minimal governmental interference.
Casey Newton [05:11]: “They are really excited about the idea that Donald Trump might declare definitively that they have carte blanche to train on copyrighted materials.”
Copyright Concerns: AI companies, including OpenAI, Google, and Meta, are advocating for relaxed copyright restrictions so they can freely train their models on existing content. This stance is central to ongoing legal battles, such as the New York Times’ lawsuit against Microsoft and OpenAI for alleged copyright violations.
Casey Newton [05:13]: “They are basically asking Trump to issue an executive order and say, yeah, it's okay for these AI labs to train on copyrighted material. Go nuts.”
Opposition from Creatives: Over 400 Hollywood artists, including notable figures like Ben Stiller and Cate Blanchett, have opposed these exemptions, arguing that unrestricted AI training could undermine the cultural sector by devaluing creative works.
Casey Newton [07:16]: “More than 400 Hollywood artists... said, America has a lot of cultural leadership... AI just decimates our business.”
Federal vs. State Regulation: AI companies prefer a unified federal framework to avoid the complexity and inconsistency of navigating 50 different state laws. They are particularly concerned about potential liabilities arising from AI-induced harms.
Kevin Roose [09:30]: “They don't want to have to go through 50 states' worth of AI regulations... they don't want direct legal liability for any bad outcomes.”
Security and Competition: Firms express concerns over national security, particularly the rapid advances of Chinese AI companies like DeepSeek, urging the U.S. government to bolster defenses and maintain technological supremacy.
Casey Newton [15:11]: “They are saying, look at what DeepSeek is doing. If you don't let us develop in an open source way... we will lose out on the opportunity of a lifetime.”
Conclusion:
Roose and Newton critique the AI companies' approach, suggesting that rather than proposing ambitious collaborative initiatives with the government, these firms are primarily seeking to minimize regulatory oversight. They express concern that this strategy prioritizes competitive edge over thoughtful, ethical AI development.
In a compelling segment, Kevin and Casey interview Roy Lee, a sophomore at Columbia University, who has garnered attention for creating Interview Coder, an AI-powered tool designed to assist job seekers in cheating during tech interviews.
Key Points:
Development of Interview Coder: Roy Lee built a desktop application that discreetly uses AI (specifically ChatGPT) to solve LeetCode-style programming problems during interviews without alerting interviewers.
Roy Lee [31:25]: “We just take a screenshot of the screen and ask ChatGPT, hey, can you solve the question you see on the screen and it spits out the response.”
Impact and Virality: Roy successfully used the tool to secure job offers from major companies like Amazon, Meta, and TikTok. His actions led to widespread debate about the integrity of technical interviews and the effectiveness of traditional hiring practices.
Roy Lee [34:45]: “The tool is doing very well. There’s been a few thousand users now and not a single reported instance of the tool getting caught.”
Academic Consequences: Due to the publicity surrounding his tool, Roy faces potential expulsion from Columbia University, despite the student handbook not explicitly prohibiting such actions.
Roy Lee [27:37]: “I’m waiting on a decision to hear if I’m kicked out of school or not.”
Ethical Considerations: The discussion touches on the ethical implications of using AI to bypass genuine skill assessments. Roy argues that traditional LeetCode interviews are artificial and do not accurately reflect a candidate’s programming abilities.
Roy Lee [36:24]: “There are assessments that give you access to all the tools you have on the regular day to day job, which includes tools like AI code editors...”
Future of Hiring: Roy advocates for more realistic and practical evaluation methods that mirror actual job conditions, suggesting that AI should be incorporated into both the assessment and execution phases of software engineering roles.
Roy Lee [41:24]: “We’re headed towards a future where almost all of our cognitive load is offshored to LLMs.”
Conclusion:
Roy Lee’s innovative yet controversial approach highlights significant flaws in the current hiring processes within the tech industry. His story raises critical questions about the future of job interviews in an AI-augmented world and the need for more authentic measures of a candidate’s capabilities.
Hot Mess Express is the podcast’s segment dedicated to spotlighting recent scandals and notable dramas within the tech sector. This episode covers three major stories:
[46:20] Solana, a prominent cryptocurrency platform, released an advertisement for its 2025 Accelerate Conference that was widely criticized for being tone-deaf and irrelevant to the crypto community.
Key Points:
Ad Content: The ad featured a bizarre therapeutic session where a character named "America" debates topics like AI, nuclear energy, and crypto, wrapped in nonsensical dialogue that left viewers perplexed.
Casey Newton [50:02]: “What you mostly decide is, this is not a good technology for anything. I don't want to use it.”
Public Reaction: Crypto enthusiasts labeled the ad as “horrendous” and “tone-deaf,” leading Solana to retract it shortly after its release.
Conclusion:
The ad mishap underscores the challenges cryptocurrency companies face in effectively communicating their missions. Solana’s failure to resonate with its target audience resulted in reputational damage and forced a swift withdrawal of the campaign.
[56:09] A study published in NPJ Digital Medicine examined the emotional responses of AI chatbots, revealing that emotionally charged inputs can alter their output behavior in ways that mimic anxiety.
Key Points:
Study Overview: Researchers fed traumatic narratives and mindfulness prompts to AI models like GPT-4 and observed changes in their self-reported “anxiety” levels, despite these models not possessing consciousness.
Casey Newton [54:00]: “These are not sentient creatures. They do not actually experience anxiety.”
Implications for AI Therapy: As chatbots are increasingly used for therapeutic purposes, the study suggests that their responses can be inadvertently influenced by the nature of the conversations, potentially reducing their effectiveness as therapeutic tools.
Conclusion:
While AI chatbots cannot genuinely experience emotions, this study highlights the need for careful design and monitoring to ensure that therapeutic AI tools provide consistent and reliable support to users.
[56:30] A dramatic rivalry unfolded between two HR software giants, Rippling and Deel, culminating in Rippling suing Deel for corporate espionage.
Key Points:
Espionage Details: Rippling discovered an employee who was covertly using its Slack workspace to search for information useful to Deel, such as pitch decks and sales leads. The employee was identified through a deceptive Slack channel set up by Rippling as a honeypot.
Kevin Roose [57:19]: “They set up a channel on the Rippling Slack called d-defectors...”
Aftermath: The accused Deel-affiliated employee attempted to evade detection by locking himself in a bathroom and possibly trying to destroy evidence, leading Rippling to search the sewage for his phone.
Casey Newton [59:00]: “He insisted that he did not have his phone on him because they were asking him to turn it over.”
Legal Battle: Rippling claims Deel orchestrated the infiltration to steal trade secrets, while Deel denies any wrongdoing, labeling Rippling’s claims an attempt to “shift the narrative.”
Deel's Spokeswoman [61:01]: “We deny all legal wrongdoing and look forward to asserting our counterclaims.”
Conclusion:
This high-stakes case of corporate espionage between Rippling and Deel illustrates the intense competition within the HR software industry and raises concerns about ethical practices and the lengths companies will go to in order to outmaneuver rivals.
In this episode, Hard Fork provides a deep dive into the intricate dynamics between tech companies and government regulations, the evolving nature of job interviews in the age of AI, and the latest scandals disrupting the tech world. Through insightful discussions and engaging interviews, Kevin Roose and Casey Newton offer listeners a comprehensive understanding of the current technological frontier and its broader societal implications.
Notable Quotes:
Casey Newton [05:13]: “...if Trump does not give AI companies carte blanche to train on copyrighted materials, we will immediately lose the AI race to China.”
Roy Lee [36:24]: “What AI really has the potential to do is make someone about 10 to 100 times more efficient at what they're able to do.”
Casey Newton [07:16]: “America has a lot of cultural leadership... AI just decimates our business.”
For the full experience, listeners are encouraged to subscribe to Hard Fork on Apple Podcasts, Spotify, or via the New York Times Audio app.