Charlotte Gartenberg
Welcome to Tech News Briefing. It's Thursday, March 6th. I'm Charlotte Gartenberg for the Wall Street Journal. Artificial intelligence tools are being used by companies across industries. One sector that's seeing big changes: coding.
Charlotte Gartenberg
WSJ reporter Isabelle Bousquette tells us how generative AI is transforming coding development jobs, and why it could be paving the way for leaner teams. Then: Ilya Sutskever, former Chief Scientist at OpenAI, is one of the most revered AI researchers in the industry. His new startup is already worth $30 billion. But what does his secretive company, Safe Superintelligence, do? Our reporter Berber Jin shares what we know about the startup so far. But first, AI coding tools can automate large portions of code development. But will this tech replace human workers? For that answer, we're talking to our reporter Isabelle Bousquette, who covers enterprise tech. All right, Isabelle, I know there's been some panic over AI taking over jobs, but that's not quite what's happening here. AI can't just write the entire code for you, can it?
Isabelle Bousquette
That's right. You probably wouldn't want to leave that job up to AI, at least not quite yet. But what we're finding is that these tools are actually doing a pretty good job of being assistants, helping coders and developers get a lot more code written at a much faster rate than they could before. These teams are being a lot more efficient. We're seeing companies typically citing double-digit efficiency gains, anywhere from 10% to 30%. It's less a question of, oh, your entire development team is gone tomorrow, and more a question of, wow, your development team is doing a lot more work than it ever could in the past. And what does that mean?
Charlotte Gartenberg
How do these AI coding assistants make things faster?
Isabelle Bousquette
Essentially, these AI tools are trained on a lot of code, and they typically do a really good job with it. If there's standard, boilerplate, busywork-type code that you might have to write, code that's a little more commoditized, a lot of that can just be automated. You do that by prompting the model: you go in and explain in English what you need it to do. Sometimes it can work almost like autocomplete. You can also think of the coding assistants that way, anticipating what might need to come next and suggesting it.
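The workflow described here, an English prompt in and boilerplate code out, can be sketched with a small stub. This is a toy illustration only: `generate_boilerplate` and its canned template are hypothetical stand-ins for a real model call, not any actual assistant's API.

```python
# Toy sketch of the prompt-to-boilerplate workflow described above.
# A real coding assistant would call a large language model here;
# this stub maps a keyword in the English prompt to a canned template.

TEMPLATES = {
    "dataclass": (
        "from dataclasses import dataclass\n"
        "\n"
        "@dataclass\n"
        "class {name}:\n"
        "    id: int\n"
        "    name: str\n"
    ),
}


def generate_boilerplate(prompt: str, name: str) -> str:
    """Pretend assistant: turn an English request into boilerplate code."""
    for keyword, template in TEMPLATES.items():
        if keyword in prompt.lower():
            return template.format(name=name)
    raise ValueError("no template matches this prompt")


if __name__ == "__main__":
    code = generate_boilerplate("Write a dataclass for a user record", "User")
    print(code)
```

In a real tool the template lookup is replaced by a model trained on large amounts of code; the stub only shows where the assistant slots into a developer's workflow, with the human describing intent and reviewing the generated output.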
Charlotte Gartenberg
So how widespread is the use of AI coding tools right now?
Isabelle Bousquette
It's pretty widespread. Most big companies are either using some iteration of these tools or exploring them. A couple of years ago, when ChatGPT propelled the idea of generative AI into the public consciousness and all these companies were scrambling to figure out what they could do with AI, they found that coding was one of the earliest use cases that could deliver clear efficiencies. One of the most popular tools here is GitHub Copilot, which is owned by Microsoft. Microsoft said in its earnings that Copilot has been adopted by more than 77,000 organizations. So, pretty widespread. But there are plenty of other tools out there as well.
Charlotte Gartenberg
So how is this changing how companies are looking for talent?
Isabelle Bousquette
There are a lot of really interesting dynamics at play here. On the first question, of whether jobs are going to disappear: companies are really hesitant to say yes, but what they are willing to say is, we're doing more with smaller teams. It's also important to acknowledge that these coding tools have room to grow. They tend to be better at generating net new code than at migrating or updating existing code, which is something big legacy companies do a lot of, just maintaining the code they already have. The jobs of developers will change as they spend less and less time sitting and writing code; they'll be able to spend more time thinking about how to use the AI tools and how to prompt them. There are some really interesting workforce dynamics changing here.
Charlotte Gartenberg
That was WSJ reporter Isabelle Bousquette.
Charlotte Gartenberg
Coming up: AI researcher Ilya Sutskever's new startup is already worth $30 billion, thanks largely to its founder's reputation. What we know so far about the secretive company Safe Superintelligence, after the break.
Charlotte Gartenberg
Ilya Sutskever is one of the most revered researchers in the AI industry. He co-founded OpenAI in 2015 with Sam Altman and Elon Musk, served as the company's chief scientist, and helped develop the language-model technology that underpinned ChatGPT. But Sutskever left OpenAI last year. His new startup, Safe Superintelligence, is already worth $30 billion, making it one of the most valuable companies in tech. Our reporter Berber Jin covers startups and venture capital, and he's here now with more on Sutskever and his secretive startup. Before we get into it, we should note that News Corp, owner of the Wall Street Journal, has a content licensing partnership with OpenAI. So, Berber, why did Ilya Sutskever leave OpenAI last year?
Berber Jin
Sutskever was one of the board members who famously fired Sam Altman in November 2023. At the time, he had grown distrustful of Altman, and the two were also fighting over how to allocate OpenAI's scarce computing resources. Sutskever was more of a pure research, technical mind. He wanted OpenAI's computing power devoted to creating safe superintelligence, devoting everything to building the most powerful AI possible in the lab. Altman, after ChatGPT, was much more commercially focused: he wanted to grow OpenAI's revenue, he wanted to release products. So they were clashing over the direction of the company. At the same time, interpersonal tensions grew, with Sutskever feeling Altman wasn't being completely truthful in his dealings with the board. And he, very famously, was the one who told Altman to click on a Google Meet link, where Altman would be fired. That triggered the four-day crisis within the company that ended with Altman reinstated. After that, Sutskever essentially disappeared from the company. It was a very difficult experience for him; he recanted and said he regretted firing Altman. There was a lot of pressure on him to return, but he felt very conflicted, and he ultimately decided last May to leave OpenAI and co-found his own startup, Safe Superintelligence.
Charlotte Gartenberg
So what's Safe Superintelligence?
Berber Jin
Safe Superintelligence is what Sutskever calls the world's first straight-shot lab devoted to creating superintelligence, the idea of an AI that surpasses humans at every possible task. When he co-founded the lab, he released a manifesto that was very sparse on details, but in it he said that Safe Superintelligence, the startup, would devote all of its resources and energy toward creating superintelligence. They wouldn't release products, they wouldn't focus on growing revenue, or on any other attribute of a fast-growing startup trying to scale the business and win customers. He essentially said, no, we're not going to focus on any of that; we want to build the world's most powerful AI. That's essentially all we know about the startup and what it's planning to do.
Charlotte Gartenberg
Do we know anything about how he plans for Safe Superintelligence to make money?
Berber Jin
What Sutskever has said about Safe Superintelligence is that he's discovered what he calls a different mountain to climb when it comes to developing and improving AI models. Right now, all the leading labs, including OpenAI, Google and Anthropic, are essentially saying the way to build more powerful AI is to add more computing power and dump more data into training these models. Sutskever has said that thesis is broken, and he's alluded to having discovered something else that could hold the key to developing AI faster than anyone else. But he's keeping it very close to the chest; he's not even telling some of his investors what that approach is. The big question behind his startup is whether they've discovered something new that no one else has, for example, a much cheaper way to develop advanced AI. If that's the case, it could restack the entire pecking order in the AI race. Say they discover something that OpenAI or Google isn't able to discover; all of a sudden, those companies might be left in the dust. OpenAI has a $300 billion valuation. All of that is at risk if Sutskever has actually caught onto something that no one else has.
Charlotte Gartenberg
Okay, it's a secretive company, but it has some big backers. Who are they?
Berber Jin
A lot of top Silicon Valley investors have backed the company: Sequoia Capital, Andreessen Horowitz, and Greenoaks Capital, a very well-known venture firm in San Francisco. The question is, what are those investors seeing? Are they getting an inside peek at what he's doing? It's just too early to tell. They're essentially betting on the man himself. In Silicon Valley, venture capitalists like to talk about how they bet on a founder, and it doesn't matter if that founder hasn't developed a product or a path to profits; they say, we believe in this person and we're going to put money behind them. Sutskever is the most extreme example of that I've seen, having covered Silicon Valley for many years.
Charlotte Gartenberg
That was WSJ tech reporter Berber Jin. And that's it for Tech News Briefing. Today's show was produced by Jess Jupiter with supervising producer Kathryn Millsop. I'm Charlotte Gartenberg for the Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.
WSJ Tech News Briefing: The Scientist Who Left OpenAI and Started a $30 Billion Firm
Release Date: March 6, 2025
Introduction
In the March 6th episode of WSJ Tech News Briefing, host Charlotte Gartenberg delves into the transformative impact of artificial intelligence (AI) across various industries, with a particular focus on coding and software development. The episode also sheds light on a significant shake-up in the AI landscape: Ilya Sutskever's departure from OpenAI to launch his own $30 billion startup, Safe Superintelligence.
AI Revolutionizing Coding Development
Timestamp: 00:33 - 04:59
AI Coding Tools Enhance Developer Efficiency
Charlotte Gartenberg introduces the segment by highlighting the pervasive adoption of AI tools in coding across industries. Isabelle Bousquette, a WSJ reporter covering enterprise tech, provides an in-depth analysis of how generative AI is reshaping coding development jobs.
Efficiency Gains Without Job Displacement
Bousquette addresses common concerns about AI replacing human developers, clarifying that AI serves as an assistant rather than a replacement. The integration of AI tools like GitHub Copilot has led to significant efficiency improvements, with companies reporting a 10-30% increase in productivity.
Widespread Adoption and Future Implications
The discussion highlights the widespread use of AI coding tools, emphasizing their role in enabling leaner development teams. While tools like GitHub Copilot are widely adopted, the potential for these technologies to streamline operations continues to grow.
Ilya Sutskever's Departure from OpenAI and the Rise of Safe Superintelligence
Timestamp: 05:58 - 11:23
Background on Ilya Sutskever and OpenAI
Charlotte Gartenberg transitions to a high-profile story featuring Ilya Sutskever, OpenAI's former Chief Scientist and co-founder. Sutskever's new venture, Safe Superintelligence, has swiftly reached a valuation of $30 billion, positioning it as a major player in the tech industry.
Reasons Behind the Departure
Berber Jin, a WSJ reporter specializing in startups and venture capital, explains the circumstances leading to Sutskever's exit from OpenAI. A clash over the company's direction—between focusing on safe superintelligence research versus commercial product development—culminated in internal conflicts and ultimately Sutskever's departure.
Introducing Safe Superintelligence
Safe Superintelligence is portrayed as a hyper-focused research lab dedicated solely to developing superintelligence: AI that surpasses human capabilities across all tasks. Unlike typical startups, it emphasizes research over revenue, aiming to build the world's most powerful AI without the distractions of product launches or scaling operations.
Funding and Strategic Implications
Despite its secretive nature, Safe Superintelligence has attracted significant investment from top Silicon Valley venture capital firms, including Sequoia Capital and Andreessen Horowitz. Jin discusses the uncertainties surrounding the startup's business model and strategic innovations, which could potentially disrupt the current AI hierarchy.
Potential Impact on the AI Industry
Jin speculates on the possible outcomes of Safe Superintelligence's advancements. If Sutskever's approach diverges from the industry norm—particularly his skepticism about scaling AI through increased computing power and data—his company could redefine the competitive landscape, challenging giants like OpenAI and Google.
Conclusion
The episode concludes by reiterating the significance of Ilya Sutskever's move from OpenAI to founding Safe Superintelligence, framing it as a pivotal moment in the AI industry's evolution. With substantial backing and a bold vision for superintelligence, Safe Superintelligence stands poised to make profound impacts on the future of AI development and its applications across various sectors.
This summary encapsulates the key discussions, insights, and conclusions from the March 6, 2025 episode of WSJ Tech News Briefing, providing a comprehensive overview for those who have not listened to the podcast.