Transcript
Brian McCullough (0:04)
Welcome to the Techmeme Ride Home for Tuesday, May 20, 2025. I'm Brian McCullough. Today: all the headlines from the Microsoft Build 2025 keynote. A whole slew of legislation that relates to tech is making its way through Congress. Temu and Shein can't catch a break. Nvidia is serious about using AI for robotics, and how something called social sensitivity can make autonomous cars even safer. Here's what you missed today in the world of tech. They did this to me last year too, I think. The Microsoft Build conference kicked off yesterday and Google I/O is today, so it's hard to cover both at the same time without shortchanging one or the other. So we'll cover Build today and I/O tomorrow. Build first, beginning with Microsoft-owned GitHub debuting an AI coding agent for GitHub Copilot that can fix bugs, add features, improve documentation, and more. And they are open sourcing GitHub Copilot in VS Code. Quoting the Verge: To complete its work, GitHub says the AI coding agent will automatically boot a virtual machine, clone the repository, and analyze the codebase. It also saves its changes as it works, while providing a rundown of its reasoning in session logs. When it's finished, GitHub says the agent will tag you for review. Developers can then leave comments that the agent will automatically address. The agent also incorporates context from related issue or PR (pull request) discussions and follows any custom repository instructions, allowing it to understand both the intent behind the task and the coding standards of the project. GitHub says the new coding agent is available to Copilot Enterprise and Copilot Pro+ users through GitHub's site, its mobile app, and the GitHub command line interface tool. Microsoft also announced that it's open sourcing GitHub Copilot in Visual Studio Code, which means developers will be able to build upon the tool's AI capabilities. End quote. 
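The agent loop the Verge describes can be sketched, very roughly, in Python. Everything here, including every function name and log message, is an invented illustration of the described workflow, not GitHub's actual implementation.

```python
# Illustrative, self-contained sketch of the Copilot coding agent loop
# described above. All names are hypothetical; GitHub has not published
# this code.

def run_coding_agent(task, repo, custom_instructions=None):
    session_log = []                      # the agent keeps a rundown of its reasoning

    def log(step):
        session_log.append(step)

    log(f"boot VM and clone {repo}")      # boots a virtual machine, clones the repo
    log("analyze codebase")               # analyzes the code before editing
    if custom_instructions:               # follows any custom repository instructions
        log(f"apply repo instructions: {custom_instructions}")
    log(f"work on task: {task}")          # saves changes as it works
    log("tag author for review")          # when finished, tags you for review
    return session_log

def address_review_comment(session_log, comment):
    """Per the Verge, reviewer comments trigger another automatic pass."""
    session_log.append(f"address comment: {comment}")
    return session_log
```

The point is the shape of the workflow: an ephemeral environment, an audit trail of reasoning, and a review loop a human stays inside of.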
But that wasn't the end of the open sourcing, because Microsoft also open sourced the Windows Subsystem for Linux and released its code on GitHub, except for a few Windows-specific components. Microsoft also launched NLWeb, an open project that lets developers add a conversational interface to their website with a few lines of code, an AI model, and data. Quoting TechCrunch: You can use the AI model of your choice and your own data. A retailer could use NLWeb to create a chatbot that helps users choose clothing for specific trips, for example, while a cooking site could use it to build a bot that suggests dishes to pair with a recipe. Web pages built using NLWeb can optionally make their content discoverable and accessible to AI platforms that support MCP, Anthropic's standard for connecting AI models to the systems where data resides. We believe NLWeb can play a similar role to HTML for the agentic web, writes Microsoft in press materials provided to TechCrunch. It allows users to interact directly with web content in a rich, semantic manner. Microsoft didn't say either way, but NLWeb may have its origins in tech from ChatGPT maker OpenAI, Microsoft's close collaborator. End quote. Microsoft also expanded Entra, Defender and Purview, embedding them directly into Azure AI Foundry and Copilot Studio to help organizations secure AI apps and agents. For its Copilot Studio agents, Microsoft announced a computer use feature available in its Frontier program for select US users, WhatsApp integration, and more. They added Grok 3 and Grok 3 mini to their Azure AI Foundry service, and sources say Satya Nadella had been pushing for Microsoft to host Grok. Quoting the Verge: It's a surprise addition that could prove controversial internally and further inflame tensions with Microsoft's partner OpenAI. 
Microsoft has been steadily growing its Azure AI Foundry business over the past year and has been quick to embrace models from a variety of AI labs that compete with its OpenAI partner. In January, I reported in Notepad that Microsoft CEO Satya Nadella had moved with haste to get engineers to test and deploy DeepSeek R1 as it made headlines around the world. Engineers didn't sleep much over those days while they worked overtime to get R1 ready for Azure AI Foundry. Sources tell me Nadella has been pushing for Microsoft to host Grok, and he's eager for Microsoft to be seen as the hosting provider for any popular or emerging AI models. Grok is the latest model to join Azure AI Foundry, which is quickly becoming an important AI service for Microsoft as it seeks to be seen as the platform to host AI models for businesses and app developers. End quote. Microsoft also debuted Foundry Local for Windows and macOS, on ONNX Runtime, to let developers build cross-platform AI apps that can run models, tools and agents on device. And finally, Microsoft announced Microsoft Discovery, an agentic platform to accelerate scientific discovery and enterprise R&D efforts. Quoting Neo Nguyen: Microsoft Discovery doesn't lock researchers in with Microsoft's own tools. Instead, it has been built to be highly extensible, allowing researchers to integrate with models from other partners and even their own models, tools and data sets when required. Also, it is built on top of a graph-based knowledge engine to provide a deep understanding of conflicting theories, diverse experimental results, and more. Researchers can also validate and understand every step or make changes if required. Microsoft says its own researchers used Microsoft Discovery's AI models and HPC simulation tools to discover a new coolant prototype to be used in data centers in around 200 hours, a process that Microsoft says would have taken months or years otherwise. 
Microsoft is already working with a select set of Microsoft customers from various industries, including chemistry and materials, silicon design, energy, manufacturing and pharma, to develop Microsoft Discovery's capabilities. With its extensible architecture and AI capabilities, Microsoft Discovery promises to significantly shorten research timelines across various scientific fields. Back in February, Google announced a somewhat similar product called AI Co-Scientist. It's a multi-agent AI system built with Gemini, designed to serve as a virtual scientific collaborator that helps scientists generate novel hypotheses and research proposals. It will be interesting to see Microsoft and Google compete to win over researchers and scientists in the coming years. End quote. Trump has signed the Take It Down Act, criminalizing the distribution of non-consensual intimate images, basically deepfakes, and requiring platforms to promptly remove them when notified. Quoting the Verge: The bill sailed through both chambers of Congress, with several tech companies, parent and youth advocates, and first lady Melania Trump championing the issue. But critics, including a group that's made it its mission to combat the distribution of such images, warned that its approach could backfire and harm the very survivors it seeks to protect. The law makes publishing NCII, whether real or AI-generated, criminally punishable by up to three years in prison plus fines. It also requires social media platforms to have processes to remove NCII within 48 hours of being notified and, quote, make reasonable efforts to remove any copies. The Federal Trade Commission is tasked with enforcing the law, and companies have a year to comply. 
Under any other administration, the Take It Down Act would likely see much of the same pushback it does today from groups like the Electronic Frontier Foundation and Center for Democracy & Technology, which warn the takedown provisions could be used to remove or chill a wider array of content than intended, as well as threaten privacy-protecting technologies like encryption, since services that use it would have no way of seeing or removing the messages between users. But actions by the Trump administration in his first 100 days in office, including breaching Supreme Court precedent by firing the two Democratic minority commissioners at the FTC, may have added another layer of fear for some of the law's critics, who worry it could be used to threaten or stifle political opponents. Trump, after all, said during an address to Congress this year that once he signed the bill, quote, I'm going to use that bill for myself too, if you don't mind, because nobody gets treated worse than I do online. Nobody. End quote. The Cyber Civil Rights Initiative, which advocates for legislation combating image-based abuse, has long pushed for the criminalization of non-consensual distribution of intimate images, or NDII. But the CCRI said it could not support the Take It Down Act because it may ultimately provide survivors with, quote, false hope. On Bluesky, CCRI President Mary Anne Franks called the takedown provision a poison pill that will likely end up hurting victims more than it helps. Platforms that feel confident that they are unlikely to be targeted by the FTC, for example, platforms that are closely aligned with the current administration, may feel emboldened to simply ignore reports of NDII, they wrote. Platforms attempting to identify authentic complaints may encounter a sea of false reports that could overwhelm their efforts and jeopardize their ability to operate at all. End quote. But that's not all on the legislative front, because the U.S. 
Senate has advanced the GENIUS Act, a stablecoin bill, after a group of Democratic senators dropped their opposition to the bill, marking a major win for the crypto industry. Quoting Bloomberg: The industry-backed regulatory bill is now set for debate on the Senate floor, with a bipartisan group hoping to pass it as soon as this week, although senators said a final vote could slip until after the Memorial Day recess. Democrats had united to filibuster the legislation earlier this month amid a furor over President Donald Trump's crypto dealings, along with other concerns related to the regulation of stablecoins. But the Senate voted 66 to 32 on Monday night to end the filibuster. Crypto-friendly Democrats, led by Senators Kirsten Gillibrand of New York and Angela Alsobrooks of Maryland, negotiated modifications to the legislation and urged their colleagues to support it even without a ban on Trump profiting from his family's many crypto ventures while in office. And Democratic Senator Mark Warner of Virginia, an influential moderate on the Banking Committee, announced Monday he would support the measure, adding that concerns over the Trump family's business dealings shouldn't sideline broader stablecoin legislation. The legislation is not perfect, but it's far better than the status quo, Warner said. End quote. And one more bit of congressional news: the U.S. House Budget Committee has advanced a budget bill that would ban US states from enforcing any law regulating AI for 10 years. The bill now goes to the House. If you're a security or IT professional, you've got a mountain of assets to protect. You've got devices, applications and employee identities. Plus, you've got the scary stuff outside your security stack, like unmanaged devices, shadow IT apps and non-employee identities. It's a lot. Fortunately, you can conquer these risks with 1Password Extended Access Management. For example, did you know most ransomware attacks stem from unmanaged devices? 
That's why 1Password's Device Trust solution blocks unsecured and unknown devices before they access your company's apps. And don't worry, 1Password still protects against the biggest attack source: compromised credentials. Its industry-leading password manager helps employees create strong, unique logins for every app. Secure devices? Check. Secure credentials? Check. But what about employee productivity? 1Password Extended Access Management empowers hybrid employees to join the security team with end-user remediation that teaches them how and why to fix security issues without needing help from IT. You know I love 1Password and use it every single day. You should too. Go to 1Password.com/ride to secure every app, device and identity, even the unmanaged ones. Right now, my listeners get a free two-week trial at 1Password.com/ride, all lowercase. That's 1Password.com/ride. The best piece of money and investing advice I've ever gotten was to simply always do it. Always sock something away, even if the market is bumpy, because being constant will smooth things out in the end. Today's episode is sponsored by Acorns. Acorns is a financial wellness app that makes it easy to start saving and investing for your future. You don't need to be rich. Acorns lets you get started with the spare money you've got right now, even if all you've got is spare change. You don't need to be an expert. Acorns recommends a diversified portfolio that can help you weather all of the market's ups and downs. You just need to stick with it. And Acorns makes that easy too. Acorns automatically invests your money, giving it a chance to grow with time. Sign up now and join the over 14 million all-time customers who have already saved and invested over $25 billion with Acorns. Head to acorns.com/ride or download the Acorns app to get started. Paid non-client endorsement. Compensation provides incentive to positively promote Acorns. Tier one compensation provided. Investing involves risk. 
Acorns Advisers, LLC, an SEC-registered investment adviser. View important disclosures at acorns.com/ride. The hits keep coming for Temu and Shein. Sources and a document have revealed that the EU is planning to levy a flat €2 fee on the billions of small packages that enter the bloc, mainly from China, on a daily basis. So, pretty negative for the business model of you-know-who. Quoting the FT: The European Commission circulated a draft proposal on a handling fee on Monday after pressure from member states whose customs authorities are overwhelmed by the 4.6 billion items annually imported directly to people's homes. The proposal, seen by the FT, does not set the fee level, but people familiar with the commission's thinking suggested it would be about €2. Some of the money would cover customs costs but also go into the EU budget, adding billions annually. Trade Commissioner Maroš Šefčovič has promised to tackle the surge in packages, which he said had led to an increase in dangerous and non-compliant goods and complaints by EU retailers of unfair competition. End quote. Nvidia has unveiled Isaac GR00T N1.5, an open, customizable AI model for humanoid reasoning and skills, and also released GR00T-Dreams, a tool for generating synthetic motion data. Quoting GamesBeat: Nvidia said it is racing ahead with humanoid robotics technology, providing a custom foundation model for humanoid reasoning, a blueprint for generating synthetic motion data, and Blackwell systems to accelerate humanoid robot development. At the Computex 2025 trade show in Taiwan, Nvidia unveiled Isaac GR00T N1.5, the first update to Nvidia's open, generalized, fully customizable foundation model for humanoid reasoning and skills. 
Nvidia Isaac GR00T-Dreams is a blueprint for generating synthetic motion data, and Nvidia Blackwell systems accelerate humanoid robot development. Humanoid and robotics developers Agility Robotics, Boston Dynamics, Fourier, Foxlink, Galbot, Mentee Robotics, Neura Robotics, General Robotics, Skild AI and XPENG Robotics are adopting Nvidia Isaac platform technologies to advance humanoid robot development and deployment. Showcased in CEO Jensen Huang's Computex keynote address, Nvidia Isaac GR00T-Dreams is a blueprint that helps generate vast amounts of synthetic motion data, aka neural trajectories, that physical AI developers can use to teach robots new behaviors, including how to adapt to changing environments. Developers can first post-train Cosmos Predict world foundation models for their robot. Then, using a single image as the input, GR00T-Dreams generates videos of the robot performing new tasks in new environments. The blueprint then extracts action tokens, compressed digital pieces of data that are used to teach robots how to perform these new tasks. The GR00T-Dreams blueprint complements the Isaac GR00T-Mimic blueprint, which was released at the Nvidia GTC conference in March. While GR00T-Mimic uses the Nvidia Omniverse and Nvidia Cosmos platforms to augment existing data, GR00T-Dreams uses Cosmos to generate entirely new data, Jim Fan, director of AI and distinguished scientist at Nvidia, said in a press briefing. Nvidia has a very strong robotics strategy, and it is centered around what Jensen calls the three-computer problem, he noted. The firm has the OVX computer, which is meant to do simulation and graphics, with simulation physics engines, and is used to synthesize and generate data; this data is consumed by the DGX computers, which are used to train foundation models; and then it is deployed to the AGX computer, which is the runtime on the edge for platforms like humanoid robots. GR00T is the life cycle of physical AI and robot-based workflows, Fan said. 
It is an instantiation of the three-computer problem, he said. End quote. Finally today, this is interesting: autonomous vehicles trained to use what researchers are calling social sensitivity in assessing the collective impact of multiple hazards lead to fewer injuries during road accidents involving these autonomous vehicles. Quoting the FT: Autonomous cars that are trained to respond more like humans to danger will cause fewer injuries during road accidents, according to a study that shows how driverless vehicles might be made safer. Vulnerable groups such as cyclists, pedestrians and motorcyclists saw the biggest gain in protection when driverless cars used social sensitivity in assessing the collective impact of multiple hazards. The study, published in the US journal Proceedings of the National Academy of Sciences, highlights growing efforts to balance AVs' efficient operation with the need for them to minimize damage and collisions. The issue of AV ethics is attracting increasing attention as growing use of the cars offers the prospect of eliminating driver problems such as spatial misjudgments and fatigue. The study suggests human behavioral methods could provide an effective scaffold for AVs to address future ethical challenges, said its China- and US-based authors, led by Hongliang Liu of the Hong Kong University of Science and Technology. Based on social concern and human-plausible cognitive encoding, we enable AVs to exhibit social sensitivity in ethical decision-making, they said. Such social sensitivity can help AVs better integrate into today's driving communities. Social sensitivity included being attuned, like human drivers, to the vulnerabilities of specific road users and being able to judge who was likelier to be more seriously hurt during a crash. The researchers drew on evidence from neuroscience and behavioral science that humans navigate using a cognitive map to interpret the world and adapt accordingly. 
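The basic accounting the study uses, collision probability combined with likely severity of harm, weighted toward vulnerable road users, can be sketched very roughly in Python. The severity weights and scenario numbers below are invented for illustration only; the paper's actual model (successor representation over a cognitive map) is far richer.

```python
# Rough, illustrative sketch of risk accounting for AV planning:
# total scenario risk is the sum over road users of collision
# probability times likely severity of harm. Vulnerable road users
# (pedestrians, cyclists, motorcyclists) get higher severity weights.
# All numbers here are invented; they are not from the PNAS paper.

SEVERITY = {
    "pedestrian": 1.0,     # most vulnerable
    "cyclist": 0.9,
    "motorcyclist": 0.9,
    "car_occupant": 0.4,   # protected by the vehicle
}

def scenario_risk(hazards):
    """Sum collision probability x severity over every road user at risk."""
    return sum(prob * SEVERITY[kind] for kind, prob in hazards)

def reduction(baseline, improved):
    """Percentage risk reduction of the improved planner vs the baseline."""
    return 100 * (baseline - improved) / baseline

# Toy scenario: a pedestrian and a car occupant both at some collision risk.
baseline = scenario_risk([("pedestrian", 0.30), ("car_occupant", 0.20)])
improved = scenario_risk([("pedestrian", 0.18), ("car_occupant", 0.18)])
```

The reported figures (26.3% overall, 22.9% for vulnerable users) are exactly this kind of comparison: the same scenarios scored with and without the socially sensitive model, then averaged.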
The scientists based their instructions for the AV on a concept known as successor representation, which encodes predictions of how different elements in an environment will interrelate across time and space. They examined the results of harnessing their model to Ethical Planner, a system AVs use to make decisions accounting for various risk considerations. The researchers modeled 2,000 benchmark scenarios, measuring the total risk of each one by assessing the probability of collision and the likely severity of harm for the people involved. The scientists found that using their human-inspired model with Ethical Planner cut overall risks to all parties by 26.3%, and by 22.9% for vulnerable road users, compared with using Ethical Planner alone. In crash scenarios, all road users suffered 17.6% less harm, rising to 51.7% for vulnerable users. The occupants of the AV were also better off, experiencing 8.3% less harm. Today I learned that Jesse Armstrong, that dude who did the TV show Succession, has a movie coming out about tech billionaires. It's actually coming out at the end of the month on HBO. I guess it's called Mountainhead. Must-see viewing for us, I guess, right? Link to the trailer in the show notes today. Talk to you tomorrow.
