People

Adam Cochran
Bloomberg reporter Dave Lee
Compound248
CoreWeave co-founder Brandon McBee
Elon Musk
An entrepreneur and innovator guided by long-termism, driving revolutions in space exploration, electric vehicles, and renewable energy.
Fernando Cao
Perplexity CEO Aravind Srinivas
Raoul Pal
Host
A podcast host and content creator focused on the electric vehicle and energy sectors.
Topics
Host: CoreWeave's IPO reflects the uncertainty and volatility of the current market environment rather than simply the bursting of an AI infrastructure bubble. Many users of AI services complain about model selectors, so Perplexity's effort to improve the user experience is reasonable. Competition in AI will increasingly concentrate on specific domains and use cases, with continuous model upgrades and domain-specific applications becoming the focus. CoreWeave co-founder Brandon McBee: Talk of an AI bubble comes up cyclically; on-the-ground demand continues to grow. Bloomberg reporter Dave Lee: CoreWeave's financials transparently reflect the risks of the AI industry and may hint at the existence of an AI infrastructure bubble. Perplexity CEO Aravind Srinivas: Perplexity is in good financial shape with no IPO pressure; the change to Auto mode is meant to improve the user experience, not to cut costs. Elon Musk: XAI's acquisition of X combines XAI's AI capabilities with X's massive data to create greater value and advance human progress. Fernando Cao: Musk acquired X to obtain its massive real-time data as fuel for XAI's AI models, enabling it to compete with OpenAI and Anthropic. Compound248: XAI's acquisition of X may reflect desperation on XAI's part; its valuation is too high, and it faces enormous financial and competitive pressure. Adam Cochran: Musk used XAI's inflated stock valuation to acquire X at a high price, harming both XAI and X investors; Grok's valuation is too high. Raoul Pal: Musk's acquisition of X was primarily about obtaining its AI training data.


Transcript



Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. We kick off today with another big story from Friday. Our main story is about XAI combining with X. This one is about CoreWeave's IPO, which unfortunately turned into a bit of an anti-climax. The company's stock dropped by 2.5% at launch on Friday after fundraising targets were heavily downsized.

CoreWeave raised $1.5 billion from the share sale, but had initially priced the IPO to raise over $2.7 billion. At one point in the planning phase, the stock was to be priced at $55 per share, but ended up going out at just $40. The stock ended the day flat, closing at $40 and receiving no IPO pop.

Bloomberg sources said that half of the shares went to the three largest investors in the deal, with 90% going to the top 15. One of those large investors was NVIDIA, who took a $250 million allocation to add to their pre-existing 6% stake in the company. CEO Michael Intrator said that without NVIDIA, the IPO, quote, wouldn't have closed. He also added if 27 others didn't show up, it wouldn't have closed.

And while the headlines are calling this a dud of an IPO and pointing to CoreWeave specifically and AI more generally, what Intrator's pointing out is that this is a terrible moment to go public. The Nasdaq index also had a 2.5% overall drop on Friday, contributing to a 13% decline this month. Risk assets have been struggling mightily in a market that is characterized by insecurity and volatility, and hyped-up tech IPOs fall squarely in the risk asset category.

Now, to the extent that people are looking at what wasn't working about the IPO in the context of CoreWeave and AI rather than larger market volatility, people are reading this either as a potential signal for an AI infrastructure bubble, or they're pointing to some idiosyncratic warning signs in CoreWeave specifically.

On the AI bubble conversation, CoreWeave co-founder Brandon McBee is dismissive, saying, "This conversation around an AI bubble seems to come up every three to six months or so, and then it drops away. What we see on the ground and what I'm sure you're hearing in Silicon Valley is just consistent growing demand."

Obviously, if you're a regular listener to this show, you'll know that that opinion is much closer to my view than the idea that there's some massive bubble just waiting to deflate. The CoreWeave-specific problems might be a little bit more difficult to brush off. CoreWeave already has quite a bit of debt and may need to raise more to make up for the shortfall in the IPO. That is, of course, if they actually needed that capital. The company faces repayments of $7.5 billion by the end of next year, although they could also be able to refinance.

Like many companies in this AI infrastructure space, they also have a highly concentrated customer base. Microsoft represented 62% of their revenue last year, with a further 15% coming from an unknown single large customer. Microsoft has already walked away from their option to extend leases with the company. However, CoreWeave did seal a big deal with OpenAI in the lead-up to the IPO.

Bloomberg's Dave Lee commented that unlike other big cloud providers, CoreWeave really doesn't have anywhere to hide. He wrote,

nor can it hide the interconnectedness of the industry, where a handful of huge companies are simultaneously customers, suppliers, and rivals to one another. If a bubble is forming around AI and data center build-out, as Alibaba chairman Joe Tsai warned this week, it's on the balance sheet of CoreWeave where the clues might emerge, written, for the first time, in plain black and white for all to see. I continue to be skeptical of this type of analysis, but, if for no other reason than as a meta-understanding of where the market is, it's worth noting that this is a fairly common opinion.

Speaking of IPOs, Perplexity CEO Aravind Srinivas is denying that the company is under financial pressure and needs to rush to an IPO. A few days ago, a Reddit user called NothingEverHappened aired out a theory on Perplexity's subreddit, writing, I've recently noticed Perplexity making lots of changes to cut costs. My theory is that they're doing horribly financially. Those charges included an insider telling them that all funding for marketing and partnerships has been paused.

Some gremlins in the service that led them to believe cloud services had been migrated away from AWS, rumors of an IPO, and layoffs which the Redditor discovered, quote-unquote, by digging into LinkedIn profiles and finding a lot of former employees. The key complaint was changes to how the service uses Auto mode, which now removes model selection from the user during follow-up questions. The Redditor claimed that their follow-up questions were always answered by the default cheaper model, rather than a high-end reasoning model like OpenAI's o1.

Lest we think that this is just one complaining user, Perplexity's CEO, Aravind Srinivas, took to Reddit to post a response, which he also copied over onto X. Now, he didn't reference the original post, but did give some plausible explanations for each of the points and included several others addressing complaints about degradation of service.

Regarding Auto mode, Srinivas claimed that it was a UX improvement to remove the model selection in follow-up questions. He wrote that the goal is to, quote, "let the AI decide for the user if it's a quick, fast answer query, or a slightly slower multi-step pro search query, or a slow reasoning mode query, or a really slow deep research query."

The long-term future is that: an AI that decides the amount of compute to apply to a question, and maybe clarifies with the user when not super sure. Our goal isn't to save money and scam you in any way. It's genuinely to build a better product with less clutter, and a simple selector for customization options for the technically adept and well-informed users. This is the right long-term convergence point.

And by the way, I will say at this point that one of the big UI/UX complaints around these services has been model selector type issues. This is something that Sam Altman and the ChatGPT team have discussed extensively as well. Users hate the fact that they have to look through and understand which of the models is good for different things. So I don't think it's some conspiracy theory to think that Perplexity, which is an extremely UI/UX focused company, is just trying to improve that part of the experience.

Now, maybe more pointedly, was this paragraph from Srinivas, who wrote, Are we running out of funding and facing market pressure to IPO? No. We have all the funding we've raised, and our revenue is only growing. The objective behind Auto mode is to make the product better, not to save costs. I've learned it's better to communicate more transparently to avoid any incorrect conclusions. Re: IPO, we have no plans of IPOing before 2028. Ultimately, when it comes to Perplexity, I think that their bigger challenge is that every other frontier lab also wants to be the gateway to search.

They're all working to improve not only their underlying models, but also their search experience. That, more than any potential cost savings from auto model selection, is going to be the challenge that Perplexity has to overcome. As a recent note, Perplexity claims that they've crossed $100 million in annualized revenue. I still remember the days when that would have been impressive. But I guess we now live in a world where AI is just growing so fast that even that can't stem rumors.

For now, though, that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.

Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk.

Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. All right, AI Daily Brief listeners, today I'm excited to tell you about the Disruption Incubator. One of the things that our team sees all the time is a lot of frustration from enterprises. There's a fatigue around small incremental solutions.

a concern around not thinking big enough, tons of bureaucratic challenges, of course, inside big companies. And frankly, we just hear all the time from CEOs, CTOs, other types of leaders that they want to ship some groundbreaking AI agent or product or feature. In many cases, they even have a pretty well thought out vision for what this could be. Their teams are just not in an environment conducive to that type of ambition. Well, it turns out our friends at Fractional have experienced the exact same thing.

Fractional are the top AI engineers specializing in transformative AI product development. And to answer this particular challenge, they have, with perhaps a little bit of help from Superintelligent, set up what they're calling the disruption incubator for exactly this type of situation.

The idea of the disruption incubator is to give a small group of your most talented people an overly ambitious mandate, something that might have taken one to two years within their current construct. Send them to San Francisco to work with the team at Fractional, and within two to three months, ship something that would have previously been impossible.

The idea here is that you are not just building some powerful new agent or AI feature, but you're actually investing in your AI leadership at the same time. If this is something interesting to you, send us a note at agent at bsuper.ai with the word disruption in the title, and we will get right back to you with more information. Again, that's agent at bsuper.ai with disruption in the subject line.

Welcome back to the AI Daily Brief. Today we are discussing the state of the battle among the frontier labs for AI supremacy.

The specific context for the conversation is that late on Friday, we got news that Elon Musk's XAI, which is of course the parent company of Grok and the home of all his generative AI adventures, had acquired X, which is of course the former Twitter. The announcement post, which came out at 5:20 p.m. Eastern time on Friday, read, XAI has acquired X in an all-stock transaction. The combination values XAI at $80 billion and X at $33 billion, which is $45 billion less $12 billion in debt.

Since its founding two years ago, XAI has rapidly become one of the leading AI labs in the world, building models and data centers at unprecedented speed and scale. X is the digital town square where more than 600 million active users go to find the real-time source of ground truth and, in the last two years, has been transformed into one of the most efficient companies in the world, positioning it to deliver scalable future growth. XAI and X's futures are intertwined. Today, we officially take the step to combine the data, models, compute, distribution, and talent.

This combination will unlock immense potential by blending XAI's advanced AI capability and expertise with X's massive reach. The combined company will deliver smarter, more meaningful experiences to billions of people while staying true to our core mission of seeking truth and advancing knowledge. This will allow us to build a platform that doesn't just reflect the world, but actively accelerates human progress. I would like to recognize the hardcore dedication of everyone at XAI and X that has brought us to this point. This is just the beginning.

Now, on the one hand, the companies are both private and Elon presumably has the support of investors, so he can pretty much do here what he wants. Still, the deal is far from normal. The Wall Street Journal reports, "...the new valuations were determined during negotiations between the two Musk arms, which both had the same advisors, people familiar with the matter said."

The last time XAI raised money was in December, and it was thought to be valued at around $40 billion, so this deal implies a doubling in three months. That's obviously quite an acceleration, but not necessarily totally out of sync with the world of AI.
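As a quick sanity check on the announced numbers, the equity math can be worked through directly. This is just a back-of-the-envelope sketch using only the figures from the announcement (the $80 billion and $45 billion valuations and the $12 billion of debt):

```python
# Back-of-the-envelope check of the deal math from the announcement:
# XAI valued at $80B; X valued at $45B gross, less $12B of debt.

xai_equity = 80.0            # $B, XAI's valuation in the all-stock deal
x_gross = 45.0               # $B, X's valuation before debt
x_debt = 12.0                # $B of debt carried by X
x_equity = x_gross - x_debt  # $33B, matching the announced figure

combined = xai_equity + x_equity
x_owner_share = x_equity / combined  # implied stake for X shareholders

print(f"X equity value: ${x_equity:.0f}B")                      # $33B
print(f"X owners' share of combined company: {x_owner_share:.1%}")  # ~29.2%
```

Running the arithmetic also shows where later commentary gets its roughly 29% ownership figure for X shareholders in the combined entity.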

The Journal points out that this isn't the first time Elon has done something like this. Back in 2016, Elon used Tesla stock to buy his solar energy company, SolarCity. Musk is apparently a great dealmaker when he's negotiating with himself. Still, if you hold aside the mechanics and the valuations, it's very clear why this deal makes sense for the two companies.

Musk has already been open that the Grok model was trained on X data, and the chatbot is now embedded in the platform as a native assistant. The two platforms are already deeply entwined in their user experience, their resources, and even some of their personnel. For X, the merger takes the pressure off of it to thrive exclusively as an independent social media platform. Advertising revenue has not been without challenges since Musk took over in 2022. And while there have been signs that the numbers had recovered in the past few months,

X now has the additional economic value of simply becoming a data repository and a portal for XAI. Opinions on this deal, as with so much in Elon Musk world, basically come down to what you think of Elon Musk. The pro-Musk side is represented by posts like this one from Fernando Cao, who writes...

When Musk bought Twitter, everyone was confused. Why would a man focused on electric vehicles, space travel, and neural interfaces want a social media platform? But Musk had a vision that's only becoming clear now. He wasn't buying a social media company, he was acquiring a data goldmine. X had something incredibly valuable that most AI companies desperately need. Real-time, human-generated diverse data from 600 million active users. This is the perfect fuel for AI models, and it's exactly what XAI needs to compete with OpenAI and Anthropic.

Investor Chamath Palihapitiya writes, The currently best-ranked consumer AI model has just acquired the most complete corpus of scaled real-time information on the internet. The data will be a part of the pre-training to make the models XAI makes even more differentiated. This is a smart move at a moment when other model makers are caught up and slowed down by copyright lawsuits over training data, like OpenAI, or pre-training quality, like Meta.

On the flip side is the common take represented by this one from Compound248. They write, It's hard to know what to make of XAI buying X. My gut is it smacks of desperation. On the surface, the deal values X flat to Twitter's 2022 takeout value, despite massive underperformance on financial metrics. Sounds like a win for X shareholders.

But it is a stock deal, and X owners will now own 29% of the combined company, shifting from a near-pure-play social media bet to an AI bet that's very much on the come, plus a diluted share in Twitter. Yes, XAI is a powerful model, but not unusually so. XAI has de minimis revenue, is hemorrhaging cash, and its prospective business opportunity seems very difficult given that the relevant competition (a) has a head start, (b) is a murderer's row, and (c) has existing business and go-to-market strategies to build on. X's $12 billion of very high-cost debt isn't going away. It will be in perpetual cash-raising mode until that changes, which leaves it at risk to the whims of the fundraising environment and the temperature of macro animal spirits. I wouldn't bet against Elon, but I'd be very nervous as a combined company owner.

Now, honestly, this is actually fairly middle of the road. A more reflexively antagonistic take comes from Adam Cochran, who wrote, In other words, Musk used his pumped-up XAI stock to pay multiple times over value for X, but still take an $11 billion loss on the transaction, while screwing over XAI investors and X investors, and to sell your data to his own AI company. Also, Grok at $80 billion is an insanely dumb valuation. The one thing Grok does well is real-time access to Twitter data, but otherwise it's not a breakthrough model and it's terribly monetized.

Again, the middle-of-the-road take really focuses on data. Raoul Pal from Real Vision writes, It was always about the data for the AI. I talked about this when he first bought X and said it was a bargain back then due to the AI training data. And the AI is all about the robots, and the robots are all about Mars, as is everything else.

I think that while the focus on data makes sense, people might be underestimating the value of the integrated experience with Twitter content. To the extent that these companies are all competing to be the next generation search portal where people begin their internet experience, Grok offers something fundamentally different that none of the competitors, Google, OpenAI, Anthropic, Perplexity, etc., offer, which is the ability to integrate the meta conversation into deep research.

I think I have a particular point of view on this given that I've now built two podcasts, for both of which a major value proposition is the fact that we don't just talk about the news, we talk about the discussion around the news. The thing that takes longest in producing both the breakdown and the AI Daily Brief is going through and understanding dozens if not hundreds of different opinions around anything that's happening in order to be able to synthesize that into a coherent view not just of what actually happened, but what's likely to happen next based on how people are receiving that news.

This is, again, not commenting on any of the questions of self-dealing or valuations or anything like that. I just think that the Grok Twitter merger has value beyond just a pre-training data play. So what's happening, though, beyond Grok if we're using this as a way to catch up on the state of the AI frontier lab battle? Well, elsewhere in the AI space, the biggest players seem to be duking it out for leadership in the major verticals.

We have, of course, discussed extensively OpenAI's new image generator, which has been just absolutely sucking all of the oxygen out of the room. But because it was so dominant last week, a lot of people missed the fact that Anthropic's dominance in coding seems to be contested for the first time in months.

The same GPT-4o update that made ChatGPT so much better at image generation also made the model much better at coding. According to Artificial Analysis's Intelligence Index, GPT-4o is now the top non-reasoning model, overtaking Anthropic's Claude 3.7 Sonnet. Now, their index combines a range of different coding and knowledge benchmarks to come up with a blended intelligence score, but digging into the coding-specific scores, GPT-4o is now at the top of the leaderboard.

At the same time, though, there's also a ton of chatter that, actually, Google's Gemini 2.5 is an even better coding model. During last week's release, the reasoning model was clearly a high-performing coding assistant based on the benchmarks, but having tested it now for a few days, VentureBeat broke down why this model could be a big step up for programmers.

They noted that like OpenAI's models, Google provides full access to chain-of-thought reasoning. For programmers, that means you can follow the model along precisely and audit its results, picking up and correcting errors along the way. VentureBeat wrote, In practical terms, this is a breakthrough for trust and steerability. Enterprise users evaluating output for critical tasks like reviewing policy implications, coding logic, or summarizing complex research can now see how the model arrived at an answer. That means they can validate, correct, or redirect it with more confidence.

It's a major evolution from the black box feel that still plagues many LLM outputs. Many coders have also discovered that Gemini 2.5 Pro is much better at succeeding at one-shot tasks. The strong reasoning is a possible explanation. The model lays out its design and code structure before writing a single line of code.

Now this could also be just an artifact of the observability, allowing programmers to see exactly what the model is doing throughout. Another benefit that could help is Gemini's 1 million token context window. Anthropic is only now preparing to release a 500,000 token context window for Claude, an upgrade from the 200,000 tokens they offer currently. Large context windows allow bigger code bases to be uploaded, and more importantly, to be understood by the model while working on coding problems.
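To make the context-window comparison concrete, here is a rough sketch of how you might estimate whether a codebase fits in a given window. The roughly-four-characters-per-token rule used here is a crude heuristic, not any model's actual tokenizer, and the sample "codebase" is a synthetic stand-in:

```python
# Sketch: estimate whether a codebase fits in a model's context window.
# The ~4 characters per token rule of thumb is a crude approximation;
# real counts depend on the specific model's tokenizer.

CHARS_PER_TOKEN = 4  # heuristic, not exact


def estimate_tokens(text: str) -> int:
    """Approximate the token count for a blob of source code."""
    return len(text) // CHARS_PER_TOKEN


def fits_in_window(files: list[str], window_tokens: int) -> bool:
    """Check whether the combined files fit in a given context window."""
    total = sum(estimate_tokens(f) for f in files)
    return total <= window_tokens


# A hypothetical ~2 MB codebase (~525k estimated tokens):
codebase = ["x = 1\n" * 50_000] * 7

print(fits_in_window(codebase, 1_000_000))  # 1M-token window: True
print(fits_in_window(codebase, 200_000))    # 200k-token window: False
```

The point of the sketch is just the scale difference: a mid-size repository that overflows a 200,000-token window can sit comfortably inside a 1 million-token one, which is why the window size matters for whole-codebase coding work.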

One feature that also feels like it's underexplored at the moment is the new workflows that are opened up by the multimodal reasoning capabilities present in Gemini 2.5 Pro. Like the new version of GPT-4o, Gemini 2.5 can apply native reasoning to image inputs. This is valuable for more than allowing these models to easily edit images.

Developers are starting to realize there's a lot of low-hanging fruit with this feature. Yancy Min, an AI Figma plugin designer, walked through a tinkering session with GPT-4o. First, he discovered that the model can take interface code and generate an image of the interface. Then he found the model can modify the code based on visual alterations to the interface. In this case, Min brushed over a tab in the image, and the model changed the code to move the tab to the top of the screen. Gemini 2.5 Pro supports the same multimodal reasoning, and basically the TLDR is that we're barely scratching the surface on what all of this can do.
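As a sketch of what that kind of code-plus-screenshot workflow looks like mechanically, here is how a mixed text-and-image request might be assembled in the OpenAI-style chat message format. This only builds the request payload and makes no API call; the model name, prompt text, and placeholder image are illustrative assumptions, not details from the episode:

```python
# Sketch: assemble a multimodal "here's my code, here's a marked-up
# screenshot, change the code to match" request in the OpenAI-style
# chat format. No API call is made; the image is a placeholder.
import base64

interface_code = "<button class='tab'>Settings</button>"  # toy stand-in

# In a real session this would be the user's annotated screenshot bytes.
fake_png = base64.b64encode(b"not-a-real-image").decode()

payload = {
    "model": "gpt-4o",  # or a Gemini 2.5 Pro equivalent
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Here is my interface code:\n"
                    + interface_code
                    + "\nMove the highlighted tab to match the image.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{fake_png}"},
                },
            ],
        }
    ],
}

# Inspect the mixed text+image content parts before sending:
print([part["type"] for part in payload["messages"][0]["content"]])
```

The interesting design point is that code and pixels travel in the same message, so the model can reason over both at once rather than round-tripping through a separate vision step.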

Google CEO Sundar Pichai is also hinting at a major update to agentic support, tweeting, "To MCP or not to MCP, that's the question. Let me know in the comments." As you might imagine, the replies were strongly in favor of Google supporting the universal protocol for agentic tooling. And I actually think, maybe to take a step back and sum this all up, this is a really reflective evolution of the frontier model battle. Increasingly, the competition is going to be less about general performance and more about specific use cases.

We're evolving in such a way that people are actually integrating these tools at the core of new and existing workflows, and they're picking the best models and the best interfaces and the best experiences and the best products for whatever it is they're trying to do. Coding is clearly one of the breakout use cases, which is why there's so much competition around it.

Deep research-style searching is also clearly going to be a core experience, and that's why the XAI X merger is more than just about Elon's financial engineering. Ultimately, I expect over the course of the next year, not only continuous upgrades to the underlying models, but more and more focus on these specific domain areas and specific use cases where the rubber is actually hitting the road when it comes to the business applications of the underlying technology.

Anyways, interesting stuff to kick off our week, but that's going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.