Beau Friedlander (40:32)
Yeah. Virginia Heffernan, author of the book Magic and Loss: The Internet as Art, and of her active Substack, also called Magic and Loss. There's a link in the show description. So what just happened? Phil Connors breaks the loop by stopping the con. He quits treating knowledge as leverage and asks a different question: what if I actually helped? The Internet hasn't asked that question yet. Not really. Maybe AI is starting to, but LLMs also just started to advertise, so maybe not. Ancient merchants learned about customers so they could sell them wheelbarrows, and eventually that curiosity became care, as it would, I guess. The Internet went the other direction: it learned about us to own us. Phil Connors escaped Groundhog Day by understanding that knowledge without care is just another way to run a scam. Maybe that's our way out too. Is the scam all the data out there that's associated with me and about me? I think so. Yeah, I do. And that's why I try to keep as little of it out there on the Internet as I can.

So with that in mind, now it's time for the Tinfoil Swan, our paranoid takeaway to keep you safe on and offline. Are you using an LLM? And if so, how good is your security? How good is your privacy practice? What are you putting in there? Let's talk about it. Here's what you need to know. Most AI chatbots, ChatGPT, Claude, Gemini, are training on your conversations by default. That means the moment you open the box on this particular product, it starts taking things from you. "By default" means everything you type could end up teaching the next version of the model. Your ideas, your drafts, your strategies could get coughed up in someone else's query. First rule: check your privacy settings. In ChatGPT, turn off chat history and training. In Claude, look for the data retention controls, and look for incognito, because you can also say, don't train on my content there.
Google's Gemini has similar options. These settings do exist, but they're not obvious. If an LLM were a person, this is not something that would be tattooed on its knuckles or its forehead. You've got to go looking for them, and when you find them, flip them to the right setting.

Second, never put sensitive information in a chatbot, okay? I know you're writing the next great novel and you're putting it in there. Don't do it, because then it's anybody's novel. No passwords, no proprietary code, no confidential client data, and no unpublished work, if you care about protecting it. Now, is it really going to go walkabout? I don't know, and that's exactly why you need to be careful. Rule of thumb: if you wouldn't post it publicly, don't put it in an LLM.

Third, use incognito or a temporary chat mode. It'll be called something different depending on which LLM you're on, but those modes are designed to delete your conversations after the session ends. You can also just delete chats when you're done; with most LLMs, after 30 days it's gone anyway. They say they're not storing everything. Or are they? We don't know. Again, be careful about what you post.

Fourth, read the terms of service and the privacy policy. And yeah, I know nobody does, but listen, there's a trick now, and it's funny, because the trick is the LLM itself. Ask it to summarize what the company is collecting and how they're using it. You can even ask the LLM to give you the questions to ask it, to protect whatever it is you're trying to protect. And finally, remember that companies change their policies, so what was true last year may not be true now. Whatever assumptions you're making about the content you're putting into these LLMs, if it's something really sensitive, check today, check the day of, and see what's going on. Because policies change; we learned that a long time ago. What's private today?
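For listeners who paste text into chatbots from a script or a clipboard workflow, one way to follow the "if you wouldn't post it publicly" rule is to scrub obvious sensitive strings before anything leaves your machine. Here's a minimal sketch in Python; the `scrub` helper and its patterns are my own illustration, not a tool mentioned on the show, and real redaction needs far more patterns than these three:

```python
import re

# Illustrative patterns only: emails, US-style phone numbers, and
# strings shaped like API keys. Extend for your own sensitive data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def scrub(text: str) -> str:
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Reach me at jane@example.com or 555-867-5309."))
# The email and phone number come out as labeled placeholders.
```

Regex-based scrubbing is a seatbelt, not a guarantee; it won't catch your novel, your strategy memo, or anything sensitive that doesn't match a pattern. The advice in the segment still applies: the safest data is the data you never paste in.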
Tomorrow, no idea. The Internet learned how to own us a long time ago. Right now, you can get some agency back. You don't need to be owned by the Internet. It's hard, but you can start reducing the ways in which you are known to the Internet, and DeleteMe is a perfect way to start. But here's the deal: in the same way the Internet did what it did, AI can too. So don't hand AI the same playbook. If you're using an AI chatbot, just remember: set the privacy settings super tight, make it forget what you're doing, don't let it train on your data, and don't put anything in there that you don't want people to see, because breaches happen and you don't want your stuff wrapped up in that. Okay, stay safe out there. We'll talk to you next week. Thanks for listening. Bye-bye.

What the Hack is produced by Beau Friedlander, that's me, and Andrew Stephen, who also edits the show. What the Hack is brought to you by DeleteMe. DeleteMe makes it quick, easy, and safe to remove your personal data online, and was recently named the number one pick by the New York Times' Wirecutter for personal information removal. You can learn more about DeleteMe at joindeleteme.com/WTH, that's joindeleteme.com/WTH, and if you sign up there on that landing page, you will get a 20% discount. I kid you not, a 20% discount. So yes, color me fishing, but it's worth it.