B (37:17)
as if I need more caffeine. And speaking of online web-based services, there has apparently been some concern, I would say justified, you know, if you want to follow the rules, over the intersection of children's privacy enforcement and the apparent explicit need to violate that very privacy for the sake of complying with legislated age determination. Last Wednesday, on the heels of Apple's begrudging update to their age-related APIs and their app download enforcement, the U.S. Federal Trade Commission, our FTC, issued a formal policy statement with the headline "FTC Issues COPPA Policy Statement to Incentivize the Use of Age Verification Technologies to Protect Children Online." They wrote: The Federal Trade Commission issued a policy statement today announcing that the Commission will not bring an enforcement action (I don't know if I would call that incentivizing; it's more like de-threatenizing) will not bring an enforcement action under the Children's Online Privacy Protection Rule (COPPA) against website and online service operators that collect, use, and disclose personal information for the sole purpose of determining a user's age via age verification technologies. The COPPA Rule requires operators of commercial websites or online services directed to children under 13, and operators with actual knowledge that they are collecting personal information from a child, to provide notice of their information practices to parents and to obtain verifiable parental consent before collecting, using, or disclosing personal information collected from a child under 13. And what a pain in the butt it is to actually do that, right? So we see the problem here, right? The emerging age restriction regulations are placing the burden upon online services to do whatever they must to determine their visitors' ages.
But doing this could force the site to run afoul of other regulations, specifically COPPA, which are already in place to protect the privacy of their underage visitors and users. In this instance, it's necessary to carve out an explicit privacy exception so that online services will be able to collect the data that they must without fear of tripping over COPPA's restrictions. So the FTC explains: age verification technologies play a critical role in helping parents as they monitor their children's online activities. Since COPPA was enacted in 1998 (so it's been around for a while) there's been an explosion in the use of Internet-connected technologies by children. To help parents navigate the challenges associated with their children's online activities, some states have started requiring some websites and online services to use age verification mechanisms to help determine the age of users. But as noted at the FTC's recent workshop on age verification technologies, some age verification technologies may require the collection of personal information from children, prompting questions about whether such activities could violate the COPPA Rule. Christopher Mufarrige, director of the FTC's Bureau of Consumer Protection, said, quote, age verification technologies are some of the most child-protective technologies to emerge in decades. Our statement incentivizes operators to use these innovative tools, which empowers parents to protect their children online, unquote. Again, I would say it doesn't so much incentivize operators as suspend the disincentive, because it's the threat of enforcement action under COPPA that was causing them to say, wait a minute.
The policy statement (this is the statement from the FTC) states that the Commission will not bring an enforcement action under the COPPA Rule against operators of general audience sites and services, and mixed audience sites and services, that collect, use, or disclose personal information for the sole purpose of determining a user's age without first obtaining verifiable parental consent, if they comply with certain conditions, specifically that they (and we've got six bullet points): One, do not use or disclose information collected for age verification purposes for any purpose except to determine a user's age. Two, do not retain this information longer than necessary to fulfill the age verification purposes, and delete such information promptly thereafter. Three, disclose information collected for age verification purposes only to those third parties the operator has taken reasonable steps (and here again, I hate that kind of language, but okay) to determine are capable of maintaining the confidentiality, security, and integrity of the information, including by obtaining certain written assurances from those third parties. Okay, so at least transferring responsibility, hopefully legally enforceable. Four, provide clear notice to parents and children of the information collected for age verification purposes. Five, employ reasonable security safeguards for information collected for age verification purposes. And finally, six, take reasonable steps to determine that any product, service, method, or third party utilized for age verification purposes is likely to provide reasonably accurate results as to the user's age. Again, does that mean facial recognition, which we know is really prone to error? Whatever. Finally, they say the policy statement indicates that the Commission intends to initiate a review of the COPPA Rule to address age verification mechanisms.
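To make the first two of those conditions concrete (sole-purpose use and prompt deletion), here's a minimal hypothetical sketch of what a compliant age check might look like in code. Everything here, the `AgeCheckEvidence` fields, the `estimate_age` stand-in, and the hardcoded current year, is an assumption for illustration, not any real vendor's API or the FTC's prescribed mechanism:

```python
from dataclasses import dataclass

@dataclass
class AgeCheckEvidence:
    """Hypothetical personal data collected solely for age verification."""
    selfie_bytes: bytes        # e.g., a photo submitted for age estimation
    claimed_birth_year: int

def estimate_age(evidence: AgeCheckEvidence) -> int:
    # Stand-in for a real age-estimation service; here we simply trust
    # the claimed birth year. 2026 is an assumed "current year".
    return 2026 - evidence.claimed_birth_year

def verify_age(evidence: AgeCheckEvidence, minimum_age: int = 13) -> bool:
    """Return only the yes/no outcome; the raw evidence is discarded."""
    try:
        return estimate_age(evidence) >= minimum_age
    finally:
        # Conditions one and two: use the data for nothing else, and
        # delete it promptly once the age determination has been made.
        evidence.selfie_bytes = b""
        evidence.claimed_birth_year = 0

evidence = AgeCheckEvidence(selfie_bytes=b"...", claimed_birth_year=2020)
print(verify_age(evidence))   # False: well under 13
print(evidence.selfie_bytes)  # b'': raw data wiped after the check
```

The design point is simply that only the boolean outcome survives the check; the personal information collected to make the determination never outlives it.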
The policy statement will remain effective until the Commission publishes final rule amendments on this issue in the Federal Register, or until otherwise withdrawn. Okay, so this policy statement is intended essentially to provide interim cover for online sites and services that do need to enforce privacy-breaching age restriction measures today, which would otherwise expose the site to COPPA infringement. This suggests that COPPA itself, as they said here toward the end of this FTC announcement, will require amending to provide a permanent and clear path for privacy-respecting age verification for minors. So again, one piece of legislation colliding with another. Surprise. The Guardian reports that Meta's CSAM-detection AI is flooding law enforcement with low-quality, unactionable (which, as we'll see here, is really sad) false-positive reports of online child sexual abuse that are seriously hampering law enforcement's ability to function. Under the Guardian's headline "Meta's AI Sending Junk Tips to DOJ, US Child Abuse Investigators Say," here's what the Guardian reported. They said officers from the US Internet Crimes Against Children (ICAC) Task Force said that Meta's use of artificial intelligence to moderate its social media platforms is generating large volumes of useless reports about cases of child sexual abuse which are draining resources and hindering investigations. Benjamin Zweibel, a special agent with the ICAC Task Force in New Mexico, said last week during his testimony in the state's trial against Meta (so this is New Mexico versus Meta), quote, we get a lot of tips from Meta that are just junk. The state's attorney general alleges the company's platforms are putting profits over child safety. Okay, now I have to take a break here from this to say that at first I was puzzled by that.
But what I believe New Mexico's attorney general is saying is that rather than employing humans, who would be able to usefully discriminate between what is and is not actual child exploitation and abuse, Meta is endeavoring, they allege, to save money by using AI, which is not actually doing the job. So Meta is failing in their obligation, but they're failing in a way that's causing lots of trouble. The report continues, saying Meta disputes these allegations, citing changes it has introduced on its platforms, such as teen accounts with default protections. The ICAC Task Force is a nationwide network of law enforcement agencies coordinated with the U.S. Department of Justice to investigate and prosecute online child exploitation and abuse cases. Another ICAC officer, speaking on the condition of anonymity to discuss internal matters, said, quote, Meta is providing thousands of tips each month. It's pretty overwhelming because we're getting so many reports, but the quality of the reports is really lacking in terms of our ability to take serious action, unquote. The ICAC officer added that the total number of cyber tips their department had received doubled from 2024 to 2025. Both Zweibel and two ICAC officers said that unviable tips from Instagram, Facebook, and WhatsApp often contain information that's not criminal. The anonymous officers added that in other cases, tips sometimes contain information indicating that a crime may have occurred, yet vital images, videos, or text are missing or redacted. The ICAC officer added that unviable tips from Instagram have really skyrocketed recently, especially in the last couple of months, and that's one of the biggest places where they're seeing important information not being provided. In those cases, he said, we don't have the information to further the investigation. It weighs on you to know that this crime occurred, but we can't identify the perpetrator, unquote.
So just to clarify that point: these investigators are saying that what they see are clearly crimes which Meta's use of AI happened to have found (so not false positives, it's true), but that the evidence needed to take any action is missing, which would not normally be the case if it were a human-driven investigation. So Meta's use of AI is not only flooding law enforcement with crap, but it's also serving to obscure the necessary details of actual crimes it detects. You know, if we didn't know better, we'd be inclined to think this had been deliberately designed by criminals, for criminals. It wasn't, and I'm not suggesting that, but it's having that effect, right? The story continues. Asked about Zweibel's testimony and the ICAC officers' remarks, a Meta spokesperson said, quote, we've supported law enforcement to prosecute criminals for years. The DOJ has repeatedly praised our fast cooperation that has helped lead to arrests, and NCMEC has praised our streamlined and improved tip-reporting process. In 2024, we received over 9,000 emergency requests from US authorities and resolved them within an average of 67 minutes, and even more quickly for cases involving child safety and suicide. Consistent with applicable law, we've reported apparent child sexual exploitation imagery to NCMEC and support them to prioritize reports, from helping build their case management tool to labeling cyber tips so they know which are urgent, unquote. Okay, so I'll just note that while this sounds great, it doesn't appear to be responsive to the question of AI's use. That Meta spokesperson appears to be referring to the work of humans employed by Meta, not their cost-saving AI. The Guardian's reporting then shifts gears to provide some background on NCMEC, which is the National Center for Missing and Exploited Children.
The Guardian writes: by law, social media companies based in the United States are required to report any detected child sexual abuse material (CSAM) on their platforms to the National Center for Missing and Exploited Children (NCMEC). It serves as a national clearinghouse for reports, which it forwards to the appropriate law enforcement agencies across the United States and internationally. NCMEC does not have the authority to filter out any tips that may be unviable before they're sent to the relevant law enforcement agencies. So 100% has to flow through. Meta is by far the largest reporter to NCMEC. In its data report for 2024, NCMEC said Meta made 13.8 million reports across Facebook, Instagram, and WhatsApp. Okay, so 13.8 million, right? We have 12 months in a year, so simple math tells us that's over a million reports per month coming from Facebook, Instagram, and WhatsApp. And that 13.8 million is out of a total of 20.5 million tips that NCMEC received in total. So, well over half. NCMEC said that in 2024, more than 1 million CyberTipline reports were linkable to a specific US state, and those reports were made available to the ICAC task forces around the country, as well as other federal, state, and local law enforcement agencies, for investigation. Meta and other social media companies use AI to detect and report suspicious material on their sites, and employ human moderators to review some of the flagged content before sending it to law enforcement. The Guardian has previously reported that tips generated by AI that have not also been reviewed by a social media company employee often cannot be opened by a law enforcement officer without a warrant, because of Fourth Amendment protections. This extra step also slows investigations of potential crimes, lawyers involved in such cases have said.
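The back-of-envelope math a few sentences up can be checked directly. Nothing here is an assumption beyond the two figures the transcript quotes from NCMEC's 2024 data report:

```python
# The transcript's figures from NCMEC's 2024 data report:
meta_reports = 13_800_000   # Facebook + Instagram + WhatsApp combined
total_reports = 20_500_000  # all CyberTipline reports NCMEC received

per_month = meta_reports / 12          # reports per month from Meta
share = meta_reports / total_reports   # Meta's share of all tips

print(f"Meta reports per month: {per_month:,.0f}")  # 1,150,000
print(f"Meta's share of all tips: {share:.0%}")     # 67%
```

So "over a million per month" and "well over half" both hold: 1.15 million reports a month, roughly two thirds of everything NCMEC receives.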
A Meta spokesperson said, quote, it's unfortunate that court rulings have increased the burden on law enforcement by requiring search warrants to open identical copies of content we've already reviewed and reported. Our image matching system finds copies of known child exploitation at scale that would be impossible to do manually, and we work to detect new child exploitation content through technology, reports from our community, and investigations by our specialist child safety teams, unquote. Under the REPORT Act, where REPORT is an acronym for Revising Existing Procedures On Reporting via Technology, which came into force in November 2024, online service providers must broaden and strengthen their reporting obligations by notifying NCMEC's CyberTipline not only about child sexual abuse material, but also about planned or imminent abuse, child sex trafficking, and related exploitation. They must also preserve evidence for a longer period and face higher penalties if they knowingly fail to comply. Since the act passed, the number of unviable tips supplied by Meta has increased dramatically, which could be because the company is acting to ensure it is not falling afoul of the law, two ICAC officers said. So in other words, Meta is complying because they're being forced to comply. The result, however, is a lot more noise among the signal. They said many of these tips could not be construed as a crime, such as adolescent girls talking about which celebrity they find most attractive. Special Agent Benjamin Zweibel said in court, quote, based on my training and experience, it appears that they are being submitted through the use of AI, as these are common mistakes that an AI would make that a human observer would not, unquote. Zweibel added that his department receives significantly fewer tips on legitimate cases of child sexual abuse material distribution from Meta than in previous years.
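For context on that "image matching at scale" claim: known-content matching generally works by hashing incoming files and checking them against a database of hashes of previously identified material. A minimal sketch of the pattern follows. To be clear, this is not Meta's actual system; production matchers use perceptual hashes such as PhotoDNA or PDQ, which survive resizing and re-encoding, whereas the cryptographic hash used here only catches byte-for-byte exact copies:

```python
import hashlib

# A database of hashes of previously identified files. In practice this
# list comes from a clearinghouse; here it's a toy example.
known_hashes = {
    hashlib.sha256(b"previously-identified file").hexdigest(),
}

def is_known_copy(file_bytes: bytes) -> bool:
    """True if this exact file matches a previously identified one."""
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes

print(is_known_copy(b"previously-identified file"))  # True: exact copy
print(is_known_copy(b"some other file"))             # False: no match
```

The key property is that the platform never needs to store the offending files themselves to recognize re-uploads, only their hashes, which is what makes the approach workable at the volumes being discussed here.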
So, in other words, not only has the noise gone up, but the signal, the quality, has gone down. Every tip that reaches an ICAC division must be reviewed, and the influx of unviable tips is taking time and resources away from investigating legitimate cases of child abuse, said two officers. One ICAC officer said, quote, it's killing morale. We're drowning in tips, and we want to get out there and do this work. We don't have the personnel to sustain that. There's no way that we can keep up with the flood that's now coming in, unquote. So I want to chalk this up less to Meta being evil, which I don't think is the case, than to the growing pains of effective AI deployment. We're still very much learning how to best use the new and surprising capabilities of large language model networks. And I suspect that a strong case could be made for there truly being far too much content for humans to manually inspect. We've talked about this, right? With the legislation that the UK keeps circulating and trying to make happen, it's just, how are we going to do this? Apple has proposed doing on-device CSAM image comparison, and nobody wanted that. The actual volume of content is beyond human management. So although the specter of having overlord AIs examining everything that's transacted over social media feels very Orwellian, our legislators are requiring a level of oversight from social media companies that likely has no other workable solution. It will be AI. We just need to continue figuring out how best to use it. And all evidence is that we're making headway and we're going to get a lot better than we are. We can clearly see how much better we are now at using AI for code than we were a couple of years ago.
You know, this is going to get better, and I think the legislators are going to force it to be the case that, in the future, some machine intelligence is going to be watching dialogues, and users are just going to have to put up with that as a cost of the privilege of being able to communicate with encryption. I just saw a short blurb that surprised me. The news was that Russia's wonderfully named Internet watchdog, Roskomnadzor, has now blocked Russian citizens' access to, you're not going to believe how many, 469 individual VPN services, of course, inside Russia.