
The Scientist Who Left OpenAI and Started a $30 Billion Firm

2025/3/6

WSJ Tech News Briefing

People
Berber Jin
Isabel Busquets
Topics
Isabel Busquets: I found that AI coding tools mainly play an assistant role, helping programmers write code significantly faster and more efficiently, typically a 10% to 30% improvement. This isn't simply a matter of replacing programmers; it lets teams get more work done. The tools are good at standardized, highly repetitive code, generating it automatically from natural-language prompts, somewhat like code autocomplete. Most big companies are already using or exploring these tools. Their adoption is also changing companies' talent needs: companies lean toward doing more with smaller teams, and developers' focus will shift from writing code to how best to use and prompt the AI tools. The tools still have limitations, however; for example, they are weaker at migrating and updating legacy code.

Berber Jin: Ilya Sutskever left OpenAI because of serious disagreements with Sam Altman over the company's direction and the allocation of resources. Sutskever was focused on research toward safe superintelligence, while Altman emphasized commercialization and product releases. The split fueled interpersonal tensions and ultimately led Sutskever to leave and found Safe Superintelligence, whose goal is to focus on creating superintelligent AI rather than building products or chasing revenue. Sutskever believes the current approach of improving AI models by adding more computing power and data is broken and says he has found a new approach, though he has not disclosed it. Safe Superintelligence's high valuation rests mainly on investors' trust in Sutskever himself rather than on a specific business model or product.


Go further with the American Express Business Gold Card. Earn three times Membership Rewards points on flights and prepaid hotels when you book through amextravel.com. Whether your destination is a business conference or a client meeting, your purchases will help you earn more points for future trips. Experience more on your travels with Amex Business Gold. Terms apply. Learn more at americanexpress.com slash business dash gold. Amex Business Gold Card. Built for business by American Express.

Welcome to Tech News Briefing. It's Thursday, March 6th. I'm Charlotte Gartenberg for The Wall Street Journal. Artificial intelligence tools are being used by companies across industries. One sector that's seeing big changes? Coding.

WSJ reporter Isabel Busquets tells us how generative AI is transforming code development jobs and why it could be paving the way for leaner teams. Then, Ilya Sutskever, former chief scientist at OpenAI, is one of the most revered AI researchers in the industry. His new startup is already worth $30 billion. But what does his secretive company, Safe Superintelligence, do? Our reporter Berber Jin shares what we know about the startup so far.

But first, AI coding tools can automate large portions of code development. But will this tech replace human workers? For that answer, we're talking to our reporter, Isabel Busquets, who covers enterprise tech.

All right, Isabel, I know there's been some panic over AI taking over jobs, but that's not quite what's happening here. AI can't just write the entire code for you, can it?

That's right. You probably wouldn't want to leave that job up to AI, at least quite yet. But what we're finding is that these tools are actually doing a pretty good job of being assistants, helping coders and developers get a lot more code written at a much faster rate than they could before. These teams are being a lot more efficient. We're seeing companies citing typically double-digit efficiency numbers, anywhere from 10 to 20 to 30%. It's less a question of, oh, your entire development team is gone tomorrow, and more a question of, wow, your development team is doing a lot more work than it ever could in the past, and what does that mean?

How does this AI coding assistance make things faster? Essentially, these AI tools are trained on a lot of code. They do a really good job with standard, boilerplate, busywork-type code that you might have to write, code that's a little more commoditized. A lot of that can just be automated, and you do that by prompting the model: you go in and explain in English what you need it to do. Sometimes it works almost like autocomplete. You can also think of the coding assistants that way, anticipating what might need to come next and suggesting it.
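To make that prompting workflow concrete, here is a minimal sketch of asking a model to generate boilerplate code, assuming an OpenAI-style chat completions API in Python; the model name and the prompt are illustrative placeholders, not details from the episode or from any specific tool mentioned in it.

# Minimal sketch: prompting a code-generation model for boilerplate.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the model name and the prompt are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a coding assistant. Reply with code only."},
        {"role": "user", "content": "Write a Python function that validates an email address "
                                    "with a regular expression and returns True or False."},
    ],
)

# The generated boilerplate comes back as plain text for a developer to review.
print(response.choices[0].message.content)

Editor-integrated assistants surface suggestions inline as the developer types, closer to the autocomplete scenario described above, but the underlying idea is the same: a natural-language prompt producing code that a person then reviews.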

So how widespread is the use of AI coding tools right now?

It's pretty widespread. Most big companies are either using some iteration of these tools or thinking about or exploring them. A couple of years ago, when ChatGPT propelled the idea of generative AI into the public consciousness and all these companies were scrambling to figure out what they could do with AI, they found that this coding use case was actually one of the earliest that could deliver clear efficiencies. One of the most popular tools here is GitHub Copilot, which is owned by Microsoft.

Microsoft has said in its earnings that Copilot has been adopted by more than 77,000 organizations. So pretty widespread, but there are plenty of other tools out there as well.

So how is this changing how companies are looking for talent?

There are a lot of really interesting dynamics at play here. The first question is, are jobs going to disappear? Companies are really hesitant to say yes, jobs are going to disappear. But what they are willing to say is, we're doing more with smaller teams. It's also important to acknowledge that these coding tools have room to grow. They tend to be better at generating essentially net new code than at migrating or updating existing code, which is something big organizations and legacy companies end up doing a lot of, just maintaining their existing code. The jobs of developers will essentially change: now that they spend less and less time sitting and writing code, they'll be able to spend more time thinking about how to use the AI tools and how to prompt them. There are some really interesting workforce dynamic changes happening here.

That was WSJ reporter Isabel Busquets. Coming up: AI researcher Ilya Sutskever's new startup is already worth $30 billion, thanks to the founder's reputation. What we know so far about his secretive company, Safe Superintelligence, after the break.

Business taxes. We're stressing about all the time and all the money you spent on your taxes. This is my bill?

Now Business Taxes is a TurboTax small business expert who does your taxes for you and offers year-round advice at no additional cost so you can keep more money in your business. Now this is taxes. Intuit TurboTax. Get an expert now on TurboTax.com slash business. Only available with TurboTax Live Full Service.

Ilya Sutskever is one of the most revered researchers in the AI industry. He co-founded OpenAI in 2015 with Sam Altman and Elon Musk, served as the company's chief scientist, and helped develop the language model technology that underpinned ChatGPT. But Sutskever left OpenAI last year. His new startup, Safe Superintelligence, is already worth $30 billion, making it one of the most valuable companies in tech.

Our reporter Berber Jin covers startups and venture capital, and he's here now with more on Sutskever and his secretive startup. Before we get into it, we should note that News Corp, owner of The Wall Street Journal, has a content licensing partnership with OpenAI. So Berber, why did Ilya Sutskever leave OpenAI last year?

Sutskever was one of the board members who famously fired Sam Altman in November 2023.

At the time, he had grown distrustful of Altman, and the two of them were also fighting over how to allocate OpenAI's scarce computing resources. Sutskever was more of a pure research, technical mind, so he really wanted OpenAI's computing power to be devoted to creating safe superintelligence, devoting everything toward creating the most powerful AI possible in the lab.

Altman, meanwhile, was much more commercially focused. After ChatGPT, he wanted to grow OpenAI's revenue. He wanted to release products. So they were clashing a little bit over the direction of the company. And at the same time, there were all these interpersonal tensions that grew, where Sutskever felt Altman wasn't being completely truthful in his dealings with the board.

Sutskever very famously was the one who actually told Altman to click on a Google Meet where Altman would get fired, and that triggered the four-day crisis within the company, at the end of which Altman was ultimately reinstated.

After that, Sutskever essentially disappeared from the company. It was a very difficult experience for him, because he essentially recanted and said he regretted firing Altman. There was a lot of pressure for him to return to the company, but he was feeling very conflicted. He ultimately decided last May to leave OpenAI and co-found his own startup, Safe Superintelligence.

So what is Safe Superintelligence?

Safe Superintelligence is what Sutskever calls the world's first straight-shot lab devoted to creating superintelligence, the idea of an AI that basically surpasses humans at every possible task.

He released a manifesto for the lab when he co-founded it that was very sparse on details. But what he said in it was that Safe Superintelligence, the startup, would be devoting all of its resources and energy toward creating superintelligence. He said they wouldn't release products. They wouldn't focus on growing revenue. Any attribute of a fast-growing startup, wanting to scale the business, get customers, he essentially said, no, we're not going to focus on any of that. We want to build the world's most powerful AI. That's essentially all we know about the startup and what it's planning to do.

Do we know anything about how he plans for Safe Superintelligence to make money?

What Sutskever has said about Safe Superintelligence is that he's discovered what he calls a different mountain to climb when it comes to developing and improving AI models. Right now, all the leading labs, including OpenAI, Google, and Anthropic, are essentially saying the way to build more powerful AI is to pour more computing power and more data into training these models.

Sutskever has said that thesis is broken, and he's alluded to having discovered something else that could hold the key to developing AI faster than anyone else. But he's keeping it very close to the chest; he's not even telling some of his investors what that approach is. That's the big question behind his startup: have they discovered something new that no one else has yet, for example, a much cheaper way to develop advanced AI? If that's the case, it could essentially restack the entire pecking order in the AI race. Let's say they discover something that OpenAI or Google isn't able to discover. All of a sudden, those companies might be left in the dust. OpenAI has a $300 billion valuation, and all of that is at risk if Sutskever has actually caught on to something that no one else has.

Okay, it's a secretive company, but it has some big backers. Who are they?

A lot of top Silicon Valley investors have backed the company: Sequoia Capital, Andreessen Horowitz, and Greenoaks Capital, which is a very well-known venture firm in San Francisco. The question is, what are those investors seeing? Are they getting an inside peek at what he's doing? It's just too early to tell. They're essentially betting on the man himself. In Silicon Valley, venture capitalists like to talk about how they bet on a founder, and it doesn't matter if that founder hasn't developed a product or a path to profits. They're like, we believe in this guy and we're going to put money behind him. Sutskever is the most extreme example of that I've seen, having covered Silicon Valley for many years.

That was WSJ tech reporter Berber Jin. And that's it for Tech News Briefing. Today's show was produced by Jess Jupiter with supervising producer Catherine Millsap.

I'm Charlotte Gartenberg for The Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.