Transcript
Brian McCullough (0:04)
Welcome to the Techmeme Ride Home for Wednesday, July 16, 2025. I'm Brian McCullough. Today, Jensen is feeling his oats after the reprieve on China, spilling lots of tea about where he sees the AI industry. OpenAI is going after the office, and also the storefront, with interesting new integrations. Why is it so hard to create LLMs in other languages? And a first-person account of what it's like to actually work at OpenAI: the culture, the pressure, et cetera. Here's what you missed today in the world of tech.

After seemingly winning back that huge China business, Jensen Huang is feeling his oats. He's running around giving wide-ranging sort of interviews and comments on the state of the AI industry. Quoting Bloomberg, quote, Huang is touring Beijing just days after meeting with US President Donald Trump, who he said wished him a great trip. The Nvidia CEO turned on the charm during high-profile events, from the summit to widely covered meetings with the likes of Xiaomi's Lei Jun and Vice Premier He Lifeng, the key negotiator for China in trade discussions. At that meeting, they discussed Beijing's commitment to remain open to foreign investment, Huang said. Huang made it a point to wow his audience, donning a modernized Tang jacket, posing on social media with Lei, who is locally celebrated for creating a smartphone to compete with Apple's, and lauding household names from Tencent to Alibaba to ByteDance. Models like DeepSeek, Alibaba, Tencent, MiniMax and Baidu's Ernie Bot are world class, Huang told conference attendees. China's open source AI is a catalyst for global progress, giving every country and industry a chance to join the AI revolution, end quote. And quoting CNBC, quote, Huang on Wednesday also praised Chinese companies for taking an open source approach to AI, meaning developers can access the underlying code for free. Notably, OpenAI in the US has not yet taken this approach.
Alibaba-backed startup Moonshot AI last week released a new open source model called Kimi K2 that claims to beat OpenAI's ChatGPT and Anthropic's Claude on certain coding metrics. Huang added that open source technology is also key for AI safety and enables international cooperation on standards. Huang also described how AI powers China's consumer tech, such as Tencent's WeChat social media app, Alibaba's Taobao shopping app, ByteDance's Douyin short video app and Meituan's super-convenient delivery, end quote. And quoting the Times about the regulatory about-face that seemingly has put wind in his sails: quote, I don't think I changed his mind, Mr. Huang said of Mr. Trump. It's my job to inform the president about what I know very well, which is the technology industry, artificial intelligence, the developments of AI around the world. Nvidia has found itself facing Chinese export controls around a rare earth metal called dysprosium that it uses in many of its chips, but in small quantities. Beijing put controls on the metal, which is refined almost exclusively in China, in April, but Mr. Huang said he hadn't discussed that issue with Chinese officials in their meetings this week and suggested that enough dysprosium remains available for Nvidia's needs. The volume we use is not that high in the grand scheme of things. I think the amount of overall inventory around the world is sufficient for us, he said. Asked whether he had discussed China's rare earth or battery technology restrictions on Wednesday with Chinese officials, Mr. Huang replied with a laconic no, end quote. Huang also said Nvidia will, quote, accelerate the recovery of its China chip sales as he expects its U.S. export licenses to come through very shortly. Though it's also worth noting that U.S. Commerce Secretary Howard Lutnick did admit Nvidia's plan to resume its H20 AI chip sales to China is part of U.S. trade negotiations over rare earths and magnets.
We put that in the trade deal with the magnets, Lutnick told Reuters, referring to an agreement President Trump made to restart rare earth shipments to US manufacturers. He did not provide additional detail, so there are still factors here outside of Jensen's control.

OpenAI is not standing still, with two interesting new initiatives. First, sources say OpenAI is preparing ChatGPT agents that let users create files compatible with things like PowerPoint or Excel, allow them to generate reports, and handle tasks involving websites, the better to go after enterprise users, right? Quoting The Information, quote, the company has designed buttons below the ChatGPT search bar that direct users to create a spreadsheet or presentation, the person said. It isn't clear when OpenAI plans to release the features, but they imply that ChatGPT customers would also be able to download and open these Excel and PowerPoint files using a variety of applications made by companies other than Microsoft. Anyone can create files that are compatible with Excel and PowerPoint, because Microsoft has made the formats for those files open source, so OpenAI doesn't need Microsoft's permission to do so. The new ChatGPT agents will also help customers generate reports based on corporate or public data, or handle repetitive tasks involving websites, such as scheduling and booking appointments. Taken together, these productivity features could make ChatGPT even more attractive as a business tool, posing a threat to productivity suites sold by the likes of Microsoft and Google. Ironically, ChatGPT runs almost entirely on Microsoft servers due to the company's financial arrangements. Hundreds of millions of people use ChatGPT, including tens of millions of paying subscribers, and OpenAI wants to make the app a gateway through which consumers and enterprises use online services or get work done, end quote.
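On that point about the formats being open: Excel's .xlsx format is publicly documented as Office Open XML (standardized as ECMA-376), and an .xlsx file is literally just a zip archive of XML parts, which is why anyone can generate one without Microsoft's involvement. As a minimal sketch of that idea, the snippet below writes a tiny one-sheet spreadsheet using only Python's standard library, with the XML parts written by hand. (The file name and cell contents are made up for illustration; a real implementation would use a library like openpyxl rather than hand-rolling the XML.)

```python
import zipfile

# A .xlsx file is a zip archive of XML "parts" laid out per the
# ECMA-376 Office Open XML spec. These are the minimal parts for a
# one-sheet workbook with a header row and one data row.
PARTS = {
    "[Content_Types].xml": """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
  <Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
  <Default Extension="xml" ContentType="application/xml"/>
  <Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>
  <Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>
</Types>""",
    "_rels/.rels": """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>
</Relationships>""",
    "xl/workbook.xml": """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">
  <sheets><sheet name="Report" sheetId="1" r:id="rId1"/></sheets>
</workbook>""",
    "xl/_rels/workbook.xml.rels": """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>
</Relationships>""",
    # Cell values: inline strings for text, a plain <v> for the number.
    "xl/worksheets/sheet1.xml": """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
  <sheetData>
    <row r="1"><c r="A1" t="inlineStr"><is><t>Quarter</t></is></c><c r="B1" t="inlineStr"><is><t>Revenue</t></is></c></row>
    <row r="2"><c r="A2" t="inlineStr"><is><t>Q1</t></is></c><c r="B2"><v>1200</v></c></row>
  </sheetData>
</worksheet>""",
}

def write_xlsx(path: str) -> None:
    """Zip the Open XML parts into a spreadsheet Excel can open."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, xml in PARTS.items():
            zf.writestr(name, xml)

if __name__ == "__main__":
    write_xlsx("report.xlsx")
```

PowerPoint's .pptx works the same way, just with different parts: which is why ChatGPT, or any other app, can emit these files directly.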
And maybe just as interestingly, the FT says OpenAI is aiming to add a checkout system inside of ChatGPT so users complete transactions within the platform, with merchants paying a commission. Quote, the San Francisco-based company currently displays products on the platform, with an option to click through links to online retailers. It also announced a partnership with payments group Shopify in April. According to multiple people familiar with the proposals, it now aims to integrate a checkout system into ChatGPT, which ensures users complete transactions within the platform. Merchants that receive and fulfill orders in this way will pay a commission to OpenAI. The e-commerce push marks a strategic shift for the loss-making startup, valued at $300 billion, which has made revenue primarily from subscriptions to premium services. Taking a cut of sales from ChatGPT would allow the company to make money from users of its free version, a so far untapped source of revenue. OpenAI's move also represents a further threat to Google's business model, as consumers increasingly move to AI chatbots to conduct searches and discover products. The feature is still in development, so the details may change. However, OpenAI and partners such as Shopify have been presenting early versions to brands and discussing financial terms, these people added. Shopify offers checkout technology that can be integrated into other online services. It already works with social media platforms, for instance, underpinning TikTok's shopping feature. ChatGPT's product recommendations are currently generated based on whether they are relevant to the user's query and other available context, such as memory or instructions, like a specified budget. OpenAI has recently enhanced its memory feature, which allows the model to remember user preferences and provide more personalized responses. However, when a user clicks on a product, OpenAI may show a list of merchants offering it, according to its website.
This list is generated based on merchant and product metadata we receive from third-party providers. Currently, the order in which we display merchants is predominantly determined by these providers, it adds. OpenAI does not factor price or shipping into these merchant options, but expects this to evolve as we continue to improve the shopping experience. Brands and advertising agencies have been experimenting with how to promote their products in chatbot search results, for example, by posting content they believe will be more likely to be picked up by the models. The practice, similar to so-called search engine optimization, or SEO, has become known in the industry as AIO. It starts to pose big and difficult questions around what preferences AI shows in its results, one advertising chief executive said. This can potentially destroy the idea of paid search via traditional platforms and also, of course, disintermediate the way advertising agencies operate today. As recently as December, OpenAI, which is currently restructuring into a for-profit company, said it had no active plans to pursue advertising, end quote.

One of the things that is percolating in the background with AI, at least in this LLM moment, is actually a very cultural thing: basically, the need to localize models to specific regions and cultures for best results. Rest of World has a look at the Chile-led Latam GPT project, which involves more than 30 Latin American and Caribbean institutions collaborating to release an open source LLM in September. Quote, while large language models including GPT and Meta's Llama are trained on a wide range of data in languages other than English, their capability in those languages remains limited, particularly in dialects and local idioms. To address these shortcomings, which have led to inaccuracies and hallucinations, or fabrications, a group of over 30 institutions across Latin America has spent the last two years developing Latam GPT.
The Chile-led Latam GPT project is building AI in Latin America for Latin Americans, Hector Bravo, lead of disruptive technologies at Sonda, a Chilean IT firm that is not involved in the project, told Rest of World. It means redefining success metrics: not just accuracy or speed, but cultural representation, social impact and accessibility. Latam GPT is being designed for deep multilingualism and includes indigenous languages such as Nahuatl, Quechua and Mapudungun, as well as dialect variants, including some from the Caribbean, said Bravo. Latin America is following the lead of other regions. Southeast Asia's SEA-LION is a family of open source LLMs trained in nearly a dozen regional languages besides English. In Africa, users can interact with UlizaLlama in at least five different languages, including Xhosa and Zulu, while in India, BharatGPT supports over 14 regional languages, with the government recently announcing that it was building its own LLM as well. Latin America has been slow to adopt AI. It is beginning to catch up, however, with Chile leading in terms of regulation and institutional development, according to the Atlas of Artificial Intelligence for Latin America and the Caribbean, a 2025 report from the United Nations Development Programme. Chile's National Center for Artificial Intelligence was founded in 2021, and the idea for Latam GPT emerged shortly after. Although LLMs such as GPT and Llama 2 support multilingual capabilities, including Spanish, many of the data sets they are trained on are from Spain or translated from text originally written in English, limiting their ability to understand cultural and linguistic nuances. Latam GPT, which is being trained with data from schools, businesses, libraries and historical texts, is meant to better understand the context and needs of Latin American users. Omar Flores, Latam GPT's technical lead for pre-training, told Rest of World there is increasing demand for generative AI platforms in the region.
Brazil has the highest number of users of ChatGPT after the US and India, according to Demandsage, a sales analytics platform, and Llama downloads have also surged in Latin America. Teachers and students use them in classrooms, while business owners turn to them to offer customer support. Even government offices employ them to reduce processing times. In Buenos Aires, for example, the courts use ChatGPT to draft legal decisions. Clearly, the resources behind ChatGPT dwarf those of Latam GPT, which will be text-only for the foreseeable future. It will also lag behind on general questions and those not related to Latin America, said Soto. Latam GPT requires ultra-high-capacity infrastructure, specialized talent and relevant data sets, three areas where gaps still exist in the region, Carlos Honorado, chief executive officer of Orion, a Chilean AI company, told Rest of World. End quote.

While single AI agents can handle specific tasks, the real power comes when specialized agents collaborate to solve complex problems. There is, however, a fundamental gap: we have no standardized infrastructure for these agents to discover, communicate with and work alongside each other. That's where AGNTCY comes in. AGNTCY is an open source collective building the Internet of Agents, a global collaboration layer where AI agents can work together. It will connect systems across vendors and frameworks, solving the biggest problems of discovery, interoperability and scalability for enterprises. With contributors like Cisco, CrewAI, LangChain and MongoDB, AGNTCY is breaking down silos and building the future of interoperable AI. Shape the future of enterprise innovation.
Visit agntcy.org to explore use cases now. That's A-G-N-T-C-Y dot org.

Finally today, something that has gotten a lot of chatter online overnight is this post from Calvin French-Owen, a former OpenAI engineer who detailed his experience working at OpenAI, including its culture, engineering practices, rapid growth and the launch of Codex, their coding platform. He said that when he joined, OpenAI was just over a thousand people, but by the time he left it had tripled in size. That kind of growth creates a lot of strain, obviously: communication, hiring, shipping products, even how teams organize. And apparently the culture at OpenAI isn't uniform. Some teams run flat-out sprints, others move steadily, and research, applied work and go-to-market all operate on different time horizons. A surprising detail was that almost everything happens on Slack, French-Owen said. He got maybe 10 emails his whole time there. It can be overwhelming if you're not careful, but if you curate your Slack channels, apparently it works. Culturally, the company is incredibly bottom-up. Early on, he asked what the next quarter's roadmap looked like and was told there isn't one. It's very meritocratic, apparently: ideas matter more than politics. Leaders tend to rise because they consistently deliver, not because they're great at all-hands or internal maneuvering. And there's a bias toward action. People build prototypes without asking permission, and if something shows promise, teams form around it. Researchers are treated like mini executives who chase what interests them. He emphasized how quickly OpenAI pivots. Even at its size, the company will abandon a plan overnight if new data suggests a better direction. At the same time, the scrutiny is intense. He mentioned seeing press stories before internal announcements and said secrecy is taken seriously, even internally; access is restricted. But under that secrecy is a strong sense of mission. The stakes feel enormous to OpenAI-ers.
Apparently they really believe that they might be building AGI, serving hundreds of millions of users, competing against giants like Google and peers like Anthropic, all under the watch of governments and the public. He liked how OpenAI shares its models. Cutting-edge tools aren't locked behind enterprise contracts; anyone can try ChatGPT or sign up for the API. That openness, he said, is still core to the company's DNA. Safety work is another area he highlighted. Contrary to some online speculation, a lot of people there focus on practical risks like abuse, misinformation or prompt injection. More theoretical risks are also considered, but they are less of a day-to-day focus. He also dove into the engineering side. OpenAI uses a giant Python monorepo with some Rust and Go, and code quality varies wildly. Production-grade systems sit alongside quick Jupyter notebook experiments. Everything runs on Azure, though a lot of in-house tools fill gaps where Azure is weaker. He noted a strong influx of former Meta engineers, and much of the infrastructure feels, according to him, inspired by Meta's early days. Decisions are made by whoever is willing to do the work, so duplicate systems do exist, but the velocity is impressive. The highlight of his time there was the Codex launch. After returning from paternity leave, he joined a frantic seven-week sprint that merged two teams and built a coding agent from scratch: container runtime, git integrations, fine-tuned models, a new interface. They worked late nights, early mornings and weekends. The night before launch, a small group stayed up until 4am deploying, then showed up for the 8am launch and watched traffic pour in. He said it was the hardest he'd worked in a decade and one of the most rewarding projects of his career. He closed by reflecting on why he joined in the first place.
He wanted to understand how models are built, to work alongside brilliant people and to ship something meaningful. In his words, all three boxes were checked. His advice to founders was interesting: if your own startup feels stalled, either take bigger swings or consider joining one of the major labs.

So, some late-night, insomnia-induced reading learnings for you. Sort of tech history: I'm reading a book about the history of the horse as a technology and its impact on history. Couple things I learned. First, did you ever wonder why horses tend to pull things like carts and carriages and stagecoaches in pairs? Like, historically you'd see two or four or more horses in front of a stagecoach, but rarely just one. It's because horses are such social animals that it's hard to convince one singular horse to pull something, but if it senses another horse alongside, it will be more likely to be like, I guess this is what we're doing. If you only had one animal to pull a thing, historically people used oxen instead, not horses. Oxen are apparently willing to do solo work. The other thing I learned is that the chariot actually came hundreds of years before people started actually riding horses. So the, you know, Egyptian chariot armies came centuries before the likes of the Huns and the cavalry of Alexander the Great. Interesting that the cart literally came before the horse. The book is called Raiders, Rulers, and Traders: The Horse and the Rise of Empires, by David Chaffetz, by the way, and it's very readable, actually. So if you're curious, link to that in the bottom of the show notes. Talk to you tomorrow.
