A
At Vrbo, we understand that even the best of plans sometimes need a little support. So we plan for the plot twists. Every booking is automatically backed by our Vrbo Care guarantee, giving you confidence from the very start. Whenever you need help, it's ready before your stay, through the moments in between, and after your trip. Because a great trip starts with peace of mind and maybe a good playlist. But we've got the peace of mind part covered.
B
If you're a maintenance supervisor at a manufacturing facility and your machinery isn't working right, Grainger knows you need to understand what's wrong as soon as possible. So when a conveyor motor falters, Grainger offers diagnostic tools like calibration kits and multimeters to help you identify and fix the problem. With Grainger, you can be confident you have everything you need to keep your facility running smoothly. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger, for the ones who get it done.
C
Meta has gotten itself into some hot water once again. There's a new class-action lawsuit. Essentially, people are suing it over its AI-powered smart glasses, the Meta Ray-Bans. I mean, we literally just had a Super Bowl commercial about these. What's been happening is that there are human contractors overseas who apparently review the footage. And the class action's claim is that most users wearing these don't know there are other people overseas reviewing the video, especially because the glasses have been kind of marketed as, you know, having content security. And there's been a whole bunch of sensitive footage, right, including people going to the bathroom, having sex, or appearing nude. All sorts of things have apparently been reviewed by people over in Kenya. There's an investigative newspaper over in Sweden, Svenska Dagbladet, which basically worked with some of the Kenya-based subcontractors hired by Meta and asked them what types of video clips they were reviewing from these Meta Ray-Bans. So anyway, today on the podcast we're getting into this huge controversy for Meta, what this means for the future, who else is in this space, and what we can expect to see going forward. Before we get into that, I wanted to mention that if you want to try any of the AI models I talk about on the show, I'd love for you to try out my own platform, AI Box AI. Basically, you get access to over 40 of the top AI models for $8.99 a month. It's way cheaper than ChatGPT, which is 20 bucks a month, and you get access to ChatGPT, Grok, Anthropic's Claude, Google's Gemini, ElevenLabs for audio, and tons of cool image models. There's a whole bunch of stuff on there: over 40 different models from all the top companies, and it's $8.99 a month, with 20% off if you get an annual plan. So it's a great value.
Go check it out. You also can use AI to automatically build tools for you just by describing them, even if you're not a developer, like myself. Okay, let's get into what's going on with Meta. So when the controversy first broke, everyone was like, oh my gosh, why are people reviewing my Meta Ray-Ban videos? You know, if I'm going to the bathroom or something and there's a video... I mean, first of all, I don't really know why someone would be recording themselves going to the bathroom, but if they wanted to, I guess that's up to them. Beyond that, I think maybe people are concerned because these things have cameras on them, and the cameras are viewable even while footage isn't being recorded. So I think just a lot of trust has been lost in the device for, you know, a lot of different things. So when the controversy first came out, Meta said, look, we have tools in place that blur the faces of people in this quote-unquote reviewed footage to protect their privacy. But a bunch of sources who were actually working on this said that those face-blurring safeguards don't always work. So yeah, sometimes the face is blurred, but sometimes it's not. Because of this, the UK's Information Commissioner's Office actually started looking into all of this, and now it's escalated to the US: there's a newly filed federal lawsuit accusing Meta of misleading consumers about the privacy protections of their AI glasses. I think that's the biggest thing, right? If you want to strap a camera to yourself and go about all your daily tasks, you might expect there could be issues with the footage maybe being leaked or something. Although honestly, I feel like no one would ever expect this. Although the pessimistic side of me thinks that this could happen.
I mean, there's the conspiracy theory that Apple's iPhones are always listening and the cameras are always on. And you see all the laptops where people cover the laptop camera; there are all those laptop-camera-cover things so people don't hack into it. So there is that kind of concern if you have a camera that could be hacked or viewed or leaked, et cetera. But for this to be explicitly happening from the company itself, in a way that's systematic, where they're like, yeah, this just happens, I think really catches a lot of people off guard. And beyond just catching people off guard, the lawsuit is accusing them of misleading customers about the privacy protection of the glasses. The complaint was brought by two plaintiffs, Gina Bartone from New Jersey and Mateo Canu of California, and it was filed by the public interest firm Clarkson Law Firm. According to the lawsuit, the glasses have, like, marketing all over them. When you go to buy them, there are slogans like "designed for privacy," "controlled by you," and "built for your privacy." These are the slogans Meta has all over these glasses. In the lawsuit, they're arguing that those claims give customers the impression that the footage captured by the glasses is going to remain private and under their control, which is what I would assume, rather than, you know, being sent overseas to have contractors review it for, quote-unquote, quality. Like, man, that's the worst. So this lawsuit says that neither of the plaintiffs saw any clear disclosures indicating the footage from the glasses could be reviewed by human workers as part of Meta's AI training process. They also say that they would not have purchased the product if they had known about the company's review pipeline.
So if I'm going to go buy these glasses, I would like at least some sort of disclosure saying, by the way, if you film stuff on this, people are going to be watching your videos for quality assurance. Okay, well, I would like to opt out of all quality assurance; I don't want anyone watching my personal videos. Can you imagine if every video you've recorded on your iPhone (and maybe this is the case and I'm just unaware) were sent overseas to be reviewed by someone? It would feel like a major invasion of your privacy. Clarkson Law Firm also highlighted the scale of what's going on here: in 2025 alone, more than 7 million people bought the Meta smart glasses, which is a product I've been pretty excited about, pretty bullish on in regards to AI. It's a cool product; I think it's kind of trendy. According to this lawsuit, though, footage captured by the devices can be routed into a data pipeline used to train Meta's AI systems, and users are not able to opt out of this process when using certain features. Meta was talking to the BBC about this, because, yeah, I'm sure in Meta's mind they're like, gee, this is kind of the ultimate gold mine, right? We can capture so much data through the glasses, through the voice; we can use it to train our AI model and make it better and better, yada, yada. So I think that's Meta's incentive here. And even when it comes to people overseas reviewing the footage, what's probably happening is that the footage being filmed is just included in their AI training data set, and those people overseas are doing data labeling for training AI models. But all of a sudden the data labeling has everyone's personal videos in it, and that's, I think, where it's a major invasion of privacy. Honestly, the whole using-the-data-to-train-AI-models thing, I would say, is an invasion of privacy.
And we wonder why some of these AI models get so good and where they get their data from. Half the time it's sneaky ways that you don't know companies are stealing your data. But, you know, that's another conversation. In any case, talking to the BBC about all of this, Meta did acknowledge that when a user is sharing content with Meta AI, the company is going to use contractors to review that data in order to improve the system. They said, look, we disclosed this practice in our policies, like that giant terms of service and privacy policy somewhere. It's in their supplemental terms of service. So not even in their main terms; in their supplemental terms of service. But some reporters noted that references to human review were very hard to find and were more clearly spelled out in Meta's UK AI terms and not so clear in the US disclosure, which is interesting. So if you're over in the UK and you read the terms, you might actually understand that's happening; in the US, it was not very clear. One version of the policy said that Meta might review interactions with its AI systems, including the content of conversations and messages, and that such reviews, quote, may be automated or manual. So the lawsuit is basically focusing on how the glasses were marketed. I think it's talking a lot about the promotional materials, maybe not so much about the terms of service, because basically every company, right, in the terms of service, these things are like 5,000 page documents. They could put anything in there that they want, no one's ever going to read them. But when you do the marketing materials, there's only so many slogans that they're throwing around, they're very prominent on their website. This is what everyone sees. And so I do think that this lawsuit has a high potential to win.
And they have a good point here, because all the promotional material emphasized the privacy controls and, you know, told us that we had control of our data and content. Now, there are a lot of critics arguing that this is more complicated than it seems. They're saying some features of the glasses, like the multimodal AI capabilities that look at your surroundings in real time, require sending the captured images to Meta's systems. In those cases, images like that are going to be processed by an AI; they're not stored on the user's device. Right? Because if you're looking around and you're like, hey, I'm looking for the coffee shop that's supposed to be on this street, do you see it? And then the AI is scanning your video feed and it's like, oh, it's over to your right, go that way. The glasses can do cool things like that. But yeah, you obviously do have to send the footage to an AI, so that part is kind of a no-brainer. But then taking that data and using it for training and improving the model, and having humans in the loop of that, is a whole other thing. I wouldn't really like my private pictures, videos, and images on my phone being sent to an AI model to be trained on. But still, that's better, in my opinion, than sending it to a human to go review and look through every single picture on your camera roll. I don't think anyone really wants that. Meta has not commented directly on the lawsuit yet in any public way. They did have a statement where their spokesperson, Christopher Sergo, said that the glasses are designed to allow users to interact with AI hands-free, and that captured media remains on the user's device unless it's intentionally shared with Meta or others. It's like, look, if you sent us pictures of yourself going to the bathroom, that's because you intentionally shared them with us and you wanted to.
They said, quote, when people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. And then they also said that Meta filters data to protect privacy and reduces the likelihood of identifying information, so people don't know it's you they're looking at, in any case. Right. I think at the end of the day, this is highlighting the concern around what critics call luxury surveillance devices: smart glasses and other always-on AI devices that are becoming more and more common. A lot of critics are saying that this tech is raising a bunch of new questions about consent and bystander privacy. For now, the lawsuit is seeking monetary damages and a court order that would require changes to Meta's disclosures and all of their marketing. Which, if I'm being honest, I think is very fair. If Meta is saying, look, you have tons of privacy and you get control over your data, then you definitely do not want your data being sent and viewed by other people. So they can change their marketing and they can change their disclosures. I don't think the product is going to change, and the way they do a lot of things probably won't change. Some people don't mind, some people will mind, but I think the way they market it is going to have to change. So it'll be interesting to see what happens with this lawsuit; I'll follow it as it moves forward. Thank you so much for tuning into the podcast today. If you enjoyed the episode, make sure to leave a rating and review wherever you get your shows. And if you want to try out all of the latest AI models in one place for less than $20 a month, only $8.99, go check out AI Box AI. I'll leave a link in the description. Have a great rest of your day.
B
If you're a maintenance supervisor at a manufacturing facility and your machinery isn't working right, Grainger knows you need to understand what's wrong as soon as possible. So when a conveyor motor falters, Grainger offers diagnostic tools like calibration kits and multimeters to help you identify and fix the problem. With Grainger, you can be confident you have everything you need to keep your facility running smoothly. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger, for the ones who get it done.
Date: March 6, 2026
In this episode, The AI Podcast explores the unfolding controversy around Meta's Ray-Ban Smart Glasses and a newly filed class-action lawsuit alleging deceptive privacy practices. The host delves into the details of the litigation, how Meta's AI training processes are raising privacy alarms, the broader implications for smart wearables, and what might come next for both the company and the industry.
[01:00 – 07:00]
"I think maybe people are concerned because these things have cameras on that. The cameras are viewable even while there’s not footage being recorded now. So I think just a lot of trust has been lost in the device."
[07:00 – 09:00]
"Can you imagine if every video you've recorded on your iPhone—maybe this is the case and I'm just unaware—but is sent overseas to be reviewed by someone? It would feel like a major invasion of your privacy."
[09:00 – 10:30]
"The lawsuit is basically focusing on how the glasses were marketed...there’s only so many slogans that they’re throwing around, they’re very prominent on their website. This is what everyone sees."
[10:30 – 11:30]
"Honestly, the whole using the data to train AI models, I would say, is an invasion of privacy. And we wonder why some of these AI models get so good and where they get their data from. It's just half the time, sneaky ways that you don't know companies are stealing your data."
[11:30 – 12:00]
“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do.”
[12:00 – 12:30]
"If Meta is saying...you have like tons of privacy and you get control over your data, I think you, you definitely do not want your, your data being sent and viewed by other people. So...the way that they market it is going to have to change."
Loss of trust in smart device privacy:
"A lot of trust has been lost in the device for, you know, a lot of different things." [04:08]
Terms and consumer understanding:
"Basically every company, right, in the terms of service, these things are like 5,000 page documents. They could put anything in there that they want, no one's ever going to read them." [09:42]
On using personal data to train AI:
"Half the time, sneaky ways that you don't know companies are stealing your data. But, you know, that's another conversation." [11:15]
Meta’s defense of its AI data practices:
"When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do." – Christopher Sergo, Meta spokesperson [11:44]
This episode offers a comprehensive analysis of the class-action lawsuit against Meta over its Ray-Ban Smart Glasses, illustrating the growing tension between innovative AI-powered consumer devices and personal privacy rights. The host breaks down complex legal, technical, and ethical dimensions, spotlighting Meta's insufficient user disclosures, questionable marketing, and how human-in-the-loop AI practices can lead to unexpected privacy intrusions.
The takeaway is clear: while the technology is advanced and attractive, true informed consent and user control remain unresolved challenges as AI increasingly integrates into daily life. The outcome of this lawsuit—and similar cases—could shape the standards and transparency expectations for all companies operating in the burgeoning AI wearable space.