Techmeme Ride Home: Thu. 06/26 – Conflicting AI Legal Rulings
Host: Brian McCullough
Conflicting AI Legal Rulings
Introduction to the Legal Battle Over AI Training Data
In today’s episode, Brian McCullough delves into the ongoing legal disputes surrounding the use of copyrighted materials in training artificial intelligence (AI) models. The tech industry is currently witnessing contradictory court rulings that leave the legal landscape for AI development uncertain.
Microsoft Faces Lawsuit Over Pirated Books
A significant development is the lawsuit filed by a group of authors against Microsoft in a New York federal court. The plaintiffs accuse Microsoft of using nearly 200,000 pirated books without permission to train its Megatron AI model. As McCullough explains:
"A group of authors is suing Microsoft in a New York federal court, claiming the company used nearly 200,000 pirated books without permission to train its Megatron AI model."
(00:04)
The authors argue that Microsoft’s actions not only violate copyright laws but also undermine the creative works of thousands of authors by enabling AI to mimic their styles and themes. This lawsuit is part of a broader trend, with other major tech companies like Meta Platforms and Anthropic also facing similar accusations.
Contrasting Rulings: Meta vs. Anthropic
Interestingly, the legal proceedings have yielded inconsistent outcomes. While Microsoft battles allegations of copyright infringement, a judge in a separate case deemed Meta’s use of books to train its Llama AI models protected by fair use. Brian cites a Bloomberg report highlighting the judge’s narrow framing of the decision:
"The judge said it stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."
(00:04)
This decision does not grant blanket permission to use copyrighted materials; rather, it faults the plaintiffs’ arguments. In contrast, Judge William Alsup’s recent ruling in the authors’ case against Anthropic went partly in the plaintiffs’ favor, emphasizing the potential market harm posed by AI models trained on copyrighted works:
"It's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of potentially endless streams of competing works that could significantly harm the market for those books."
(00:04)
These conflicting rulings exemplify the ongoing debate and the lack of clear legal guidelines governing AI training practices.
Trump Mobile’s “Made in USA” Smartphone Controversy
Ambiguity Surrounding Manufacturing Claims
Another topic Brian touches on is the Trump Organization’s recent announcement of its Trump Mobile smartphone. The company initially claimed the phone was “made in America,” prompting skepticism about whether a smartphone can feasibly be manufactured entirely within the United States.
"Last week, part of the announcement was that it was, quote, made in America. Which got a lot of people to say how exactly, seeing as how no one else can seemingly make a smartphone entirely in the U.S."
(00:04)
In response to the backlash, Trump Mobile updated its messaging to state that the device was “brought to life right here in the USA” and is “proudly American.” A spokesperson for the company dismissed reports to the contrary:
"Chris Walker, a Trump Mobile spokesperson, dismissed the report saying that T1 phones are proudly being made in America."
(00:04)
This shift in language aims to emphasize domestic production, though questions remain about the actual extent of manufacturing within the U.S.
Creative Commons Launches CC Signals
Balancing Open Access and AI Training Needs
Creative Commons has introduced CC Signals, a new framework designed to empower dataset holders to specify how their content can be reused by machines, particularly for AI training purposes. Brian highlights the initiative’s objective:
"The CC Signals project aims to provide a legal and technical solution that would provide a framework for data set sharing meant to be used between those who control the data and those who use it to train AI."
(00:04)
This framework seeks to maintain the openness of the internet while addressing the increasing demand for data to fuel AI advancements. By allowing creators to delineate permissions and restrictions explicitly, CC Signals aims to prevent the erosion of online openness through uncontrolled data scraping and AI training.
Industry Response and Future Plans
As companies grapple with modifying their data policies, CC Signals offers a standardized approach to data licensing. Early designs are available on the Creative Commons website and GitHub, with an alpha launch and public feedback sessions planned for November 2025.
Microsoft and OpenAI’s AGI Clause Dispute
Tensions Over Artificial General Intelligence (AGI) Definitions
A significant point of contention between Microsoft and OpenAI revolves around the AGI clause in their partnership agreement. Microsoft is currently pushing to remove this clause, which allows OpenAI to limit Microsoft's access to its intellectual property if its systems achieve AGI—defined by OpenAI as highly autonomous systems outperforming humans in most economically valuable work.
"Microsoft Chief Executive Satya Nadella has expressed skepticism that reaching such a benchmark is even possible."
(00:04)
OpenAI executives, including CEO Sam Altman, believe the company is close to being able to declare that its systems have reached AGI. Microsoft’s Nadella remains unconvinced, dismissing certain AGI milestones as “nonsensical benchmark hacking.”
Contractual Implications and Future Negotiations
The disagreement over the AGI clause has led to intense negotiations, with Microsoft seeking either the removal of the clause or guaranteed access to OpenAI’s IP even after an AGI declaration. The agreement also bars Microsoft from pursuing AGI independently until 2030, and if OpenAI were to declare AGI without mutual agreement, a prolonged legal battle could follow.
DeepSeek’s R2 Model Delay Due to Nvidia Chip Shortage
Supply Chain Challenges Hindering AI Model Deployment
DeepSeek, a venture backed by High-Flyer Capital Management, faces delays in releasing its highly anticipated R2 AI model. The primary obstacle is a shortage of Nvidia H20 server chips in China, exacerbated by the recent U.S. ban on exports of the chip.
"DeepSeek's highly anticipated R2 model faces delays due to a shortage of Nvidia server chips in China, exacerbated by the US ban on Nvidia's H20 chips."
(00:04)
Despite intensive development efforts, CEO Liang Wenfeng remains unsatisfied with R2’s performance, leading to further refinement before its official release. The scarcity of H20 chips poses an additional challenge, limiting the model’s deployment capacity within Chinese cloud infrastructure and potentially constraining global availability.
Future Prospects and Market Demand
Should R2 surpass existing open-source models, demand is expected to outstrip the already strained supply of Nvidia chips, further complicating cloud providers' ability to support widespread usage. Deepseek is currently coordinating with Chinese cloud companies to navigate these hardware limitations, aiming to optimize R2’s performance within the constrained supply environment.
AI’s Impact on Jobs at Tech Companies: Salesforce vs. Industry Trends
Salesforce Embraces AI to Augment Workforce
Marc Benioff, CEO of Salesforce, shares a positive outlook on AI integration within his company, saying Salesforce now uses AI to handle 30 to 50% of its work, including software engineering and customer service tasks.
"AI is doing 30 to 50% of the work at Salesforce now, including software engineering and customer service."
(00:04)
Benioff emphasizes that AI enables employees to focus on higher-value tasks:
"All of us have to get our head around this idea that AI can do things that before we were doing. We can move on to do higher value work."
(00:04)
Salesforce is also promoting an AI product that handles customer service tasks, reportedly achieving 93% accuracy for large clients such as Walt Disney.
Contrasting Experiences at Other Tech Firms
However, not all tech companies share Salesforce’s optimism. Reports from the "Blood in the Machine" Substack newsletter highlight negative experiences at firms including Google, TikTok, Adobe, Dropbox, and CrowdStrike. Employees describe AI tools being used to justify layoffs and to increase workloads without corresponding benefits.
A CrowdStrike employee recounts:
"500 colleagues, including recent grads who uprooted their lives to move to Texas, are out of work. Those of us who remain are under pressure to do more with less: longer hours, heavier workloads, no extra compensation."
(00:04)
The integration of AI tools has added complexity and, in some cases, allowed misinformation to reach customers, damaging trust and credibility. This stark contrast underscores the varied impact of AI adoption across the tech industry, highlighting both its potential benefits and its costs.
Conclusion
Today's episode of Techmeme Ride Home sheds light on the multifaceted implications of AI advancements, from legal battles over data usage to the transformative effects on the workforce. As the tech landscape continues to evolve, the balance between innovation, ethical considerations, and regulatory frameworks remains a pivotal area of focus.
Stay tuned for more updates in tomorrow’s episode.
Notable Quotes:
- "A group of authors is suing Microsoft... using nearly 200,000 pirated books without permission to train its Megatron AI model." — Brian McCullough (00:04)
- "AI is doing 30 to 50% of the work at Salesforce now." — Marc Benioff, Salesforce CEO (00:04)
- "It's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions... significantly harm the market for those books." — Judge William Alsup (00:04)
- "Executive Vice President Eric Trump also alluded to the device being manufactured in the US... made right here in the United States of America." — Eric Trump (00:04)
This summary is based on the transcript provided and aims to encapsulate the key discussions and insights from the June 26, 2025, episode of Techmeme Ride Home.
