Today on the AI Daily Brief, as DeepSeek releases yet another model, is China now ahead on AI? Before that in the headlines, Perplexity is the latest American company to want to buy TikTok. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
We kick off today with a fun one, which is that Perplexity apparently is joining the queue of American companies that want to buy TikTok. We are, of course, several months now into the TikTok divestment story. Honestly, several years at this point. But progress does seem to have stalled. President Trump delayed the process by 75 days by executive order. However, that deadline is approaching on April 5th. One of the key issues in the minefield, among many others, is the potential for monopoly risk. Basically, the big tech companies are some of the very few who could afford an acquisition of TikTok.
But an acquisition by one of them would absolutely add to antitrust concerns. Given that, many of the viable deals were coming from private consortiums, which added the risk of a major U.S. communications platform being functionally owned by foreign investors. Enter Perplexity. The company has claimed that they are singularly positioned to rebuild the TikTok algorithm without creating a monopoly, combining world-class technical capabilities with Little Tech independence.
In a blog post that reads much more like a manifesto, Perplexity argues that control of the algorithm, not the company, is the key issue. They argued that a private consortium would functionally keep parent company ByteDance in control, while acquisition by a competitor would create a monopoly. Perplexity writes, "...all of society benefits when content feeds are liberated from the manipulations of foreign governments and globalist monopolists."
The company continued, the TikTok algorithm today is a black box. We believe these recommendation systems should be transparent. To eliminate risks of user manipulation, we propose rebuilding TikTok's algorithm from the ground up with transparency as the guiding principle. Our promise is to turn TikTok into the most neutral and trusted platform in the world. To achieve this, we commit not only to developing a new algorithm, but also making the TikTok For You feed open source.
Perplexity then included a seven-point plan for reshaping the platform with American interests at heart. Alongside the open source proposal, they suggested upgrading the AI systems and hosting them on hardware based in the US, as well as adding citations and translation. Perplexity also suggested integrating their tech into the search function and enhancing personalization. A big sticking point of this, of course, would be purchase price. Analysts have suggested that serious bids start at $30 billion and could be as much as $50 billion.
Perplexity recently made news for reportedly seeking a billion dollars in funding at an $18 billion valuation. All of which is to say that this probably is more about Perplexity inserting themselves in the conversation than it is about an actual deal coming together. Perplexity has been doing lots and lots to stay in the news cycle, but we should know a lot more about how this TikTok situation evolves over the next couple of weeks.
Next up, Meta is testing a new AI feature on Instagram, suggesting AI-generated replies to users. And the context for this is that Meta has had something of a hard time creating compelling AI use cases on their own social media platforms. Some executives have suggested that social media will have millions of AI accounts interacting with regular users, but testing that theory went very poorly, with Meta facing a sharp backlash against their AI-generated accounts.
The new feature, called Write with Meta AI, was spotted by X user Jonah Manzano. Users who have access to the feature can generate suggested replies to Instagram pictures, with the AI taking the context of the photo into consideration. Manzano demoed the feature on a picture of a smiling guy in his living room, with the AI suggesting three replies: "cute living room setup," "love the cozy atmosphere," and "great photoshoot location."
TechCrunch, for its part, worried about what AI-generated replies would do to the authenticity of Instagram's comment sections.
Frankly, I'm not particularly sure when Instagram was authentic. And I'm also not totally sure that we need to preserve the sanctity of the Instagram comment section. I also think that this particular use case, like it or loathe it, is absolutely inevitable. And we're going to see versions of this on absolutely every single social network. So if you don't like it, I apologize, but this is probably where we're headed. I'm sure there will be artisanal, no AI allowed social networks for us in the future.
Moving over to Google, that company is rolling out real-time video functionality for Gemini. Gemini can now take inputs from screens and camera feeds in real-time. The feature was first previewed by Google last year as a part of Project Astra, which was envisioned as a next-generation AI assistant. The feature is now available for some premium subscribers beginning this week, with a full rollout planned for the coming weeks. Google is the second lab to integrate this type of real-time video into their AI assistant, with OpenAI releasing the feature in December.
Both companies include video at their $20 price point, making this now a default modality for paid AI assistants. More broadly, this rollout seems to confirm Google's strategy of consolidating their product line into the core Gemini experience. Last week, the company integrated NotebookLM's generative podcast feature, aka Audio Overviews, into Gemini, allowing users to seamlessly do things like explain reports generated with their version of Deep Research.
So will it work to have Gemini be one unified experience? Time will tell, but for now, that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.
Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to five times faster, and proactively manage vendor risk.
Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.
For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. All right, AI Daily Brief listeners, today I'm excited to tell you about the disruption incubator. One of the things that our team sees all the time is a lot of frustration from enterprises. There's a fatigue around small incremental solutions.
There's a concern around not thinking big enough, and of course tons of bureaucratic challenges inside big companies. And frankly, we just hear all the time from CEOs, CTOs, and other types of leaders that they want to ship some groundbreaking AI agent or product or feature. In many cases, they even have a pretty well thought out vision for what this could be. Their teams are just not in an environment conducive to that type of ambition. Well, it turns out our friends at Fractional have experienced the exact same thing.
Fractional are the top AI engineers specializing in transformative AI product development. And to answer this particular challenge, they have, with perhaps a little bit of help from Superintelligent, set up what they're calling the disruption incubator for exactly this type of situation.
The idea of the disruption incubator is to give a small group of your most talented people an overly ambitious mandate, something that might have taken one to two years within their current construct. Send them to San Francisco to work with the team at Fractional, and within two to three months, ship something that would have previously been impossible.
The idea here is that you are not just building some powerful new agent or AI feature, but you're actually investing in your AI leadership at the same time. If this is something interesting to you, send us a note at agent at besuper.ai with the word disruption in the title, and we will get right back to you with more information. Again, that's agent at besuper.ai with disruption in the subject line.
Today, we are talking about one of the big themes dominating the conversation around AI in 2025, which is, of course, about China outcompeting the U.S. We have right now a new model from DeepSeek, new high-performance chips, and leading AI scientists returning to China. People are starting to ask with a straight face, is China pulling ahead on AI?
Right now, we have seen very few actual AI policies when it comes to the new Trump administration, at least beyond getting out of the way and letting AI companies build.
To the extent that there is any one clear policy goal, it's that China must not win the AI race. The release of DeepSeek R1 triggered a reckoning in Washington, with policymakers, and everyone else frankly, being forced to grapple with the idea that China was catching up. The event was pivotal enough that Marc Andreessen called it AI's Sputnik moment. And now, just a few months later, it's starting to look like China has, if not taken the lead, at least seems to be moving at a much quicker pace than US competitors.
If DeepSeek was a wake-up call in the West, it was a bona fide phenomenon in China. The intervening months have been filled with adoption stories and what appeared to be a nationwide push to integrate DeepSeek into absolutely everything. So what are the latest stories that have this conversation picking up heat again today? The first is that we've gotten a new DeepSeek model, an updated version of their V3 foundation model. This open-source model was launched unceremoniously on Hugging Face yesterday without any form of announcement.
The release even came with a completely empty README file, just model weights and a commercial use license. Benchmarks showed improved capabilities in reasoning and coding. Xeophon wrote, Tested the new DeepSeek v3 on my internal bench, and it has a huge jump in all metrics on all tests. It is now the best non-reasoning model, dethroning Sonnet 3.5.
This is an important point too, as people get a little bit hysterical right now. This is not DeepSeek's next reasoning model, which presumably will be called R2 and is something that people are waiting for with bated breath. But even without that, what's really grabbing headlines is a big boost to efficiency and speed.
Apple machine learning researcher Awni Hannun tested the model on his 512GB M3 Ultra Mac Studio and had it running at 20 tokens per second. Admittedly, this is only in 4-bit mode and the $9,500 Apple computer isn't quite consumer-grade hardware, but it is still a 685-billion-parameter state-of-the-art model running on hardware that costs less than a cheap used car. VentureBeat writes...
This represents a potentially significant shift in AI deployment. While traditional AI infrastructure typically relies on multiple NVIDIA GPUs consuming several kilowatts of power, the Mac Studio draws less than 200 watts during inference. The efficiency gap suggests the AI industry may need to rethink assumptions about infrastructure requirements for top-tier model performance. VentureBeat's reporting also homed in on the difference in AI strategy between the East and the West. They noted that US leaders like OpenAI and Anthropic have kept their models behind paywalls, while the Chinese approach has been to open source as much as possible, writing, This has accelerated China's AI capabilities at a pace that has shocked Western observers. They added,
This philosophy is rapidly closing the perceived AI gap between China and the United States. Just months ago, most analysts estimated China lagged one to two years behind US AI capabilities. Today, that gap has narrowed dramatically to perhaps three to six months, with some areas approaching parity or even Chinese leadership.
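For anyone curious what that kind of local run actually involves, here is a minimal sketch of one way to do it on Apple silicon using the open-source mlx-lm library. To be clear, this is illustrative rather than a reproduction of the exact setup described above: the model repo name is an assumption, and a 4-bit quantization of a roughly 685-billion-parameter model still needs on the order of 350 gigabytes of unified memory, so in practice most people would start with a much smaller open model.

# Illustrative sketch: running a 4-bit quantized open-weights model locally
# on Apple silicon with mlx-lm (pip install mlx-lm). The DeepSeek V3 repo
# name below is an assumption, and a full 4-bit V3 needs ~350GB+ of unified
# memory, so swap in a smaller model unless you own a maxed-out Mac Studio.
from mlx_lm import load, generate

# Load model weights and tokenizer from a Hugging Face repo (hypothetical name).
model, tokenizer = load("mlx-community/DeepSeek-V3-0324-4bit")

prompt = "Summarize why mixture-of-experts models are cheap to run at inference time."

# verbose=True prints generation stats, including tokens per second.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)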
And indeed, if DeepSeek follows their previous release pattern, we could see a new version of their reasoning model within the next few weeks. Certainly, the rumor mill is up and running, with SmokeAway spreading the rumor that DeepSeek R2 has scored 90% on the ARC-AGI benchmark, which would beat OpenAI's o3 score of 87%. Chubby writes, this would be absolutely nuts and would catapult China. If this is true, I don't see any way how closed source is going to win the AI race.
Now, many pointed out, however, that this is a rumor with literally no sourcing and is highly unlikely to be true. But at the same time, the resonance of this rumor is perhaps the story in terms of just how fast attitudes are shifting about where China sits in the AI race more broadly. On the chip front, China also seems to be catching up. Bloomberg reports that Alibaba affiliate Ant Group has developed a technique to cut training costs by 20% using Huawei's AI chips.
The firm has adopted a mixture-of-experts architecture, which is the same architecture used by DeepSeek. The big news, however, is that Ant Group's results were comparable to training runs on Nvidia's H800 chips, which are the downrated chips designed to comply with export controls. This suggests that Huawei's chips can substitute for US GPUs if necessary, and that export controls aren't a major impediment to advanced model training. Ant Group laid out their process in a research paper earlier this month, which highlighted their goal of scaling a frontier LLM without using premium GPUs.
They said it cost around $880,000 to train a model on one trillion tokens using high-performance hardware, but they expected to cut that to around $700,000 on lower-spec hardware using their method, roughly a 20% reduction. Robert Li, a senior Bloomberg analyst, said, "Ant Group's paper highlights the rising innovation and accelerating pace of technological progress in China's AI sector. The firm's claim highlights China is well on the way to becoming self-sufficient in AI as the country turns to lower-cost, computationally efficient models to work around the export controls on NVIDIA chips."
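Since mixture-of-experts keeps coming up in both the DeepSeek and Ant Group stories, here's a quick toy sketch of the core idea: each token gets routed to only a couple of small expert networks instead of one giant dense layer, so most of the model's parameters sit idle on any given token, which is a big part of why these models are cheaper to train and serve. This is purely illustrative and not anyone's actual implementation.

# Toy mixture-of-experts layer (illustrative only, not DeepSeek's or Ant Group's code).
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_FF = 64, 256      # hidden size and expert feed-forward size (toy values)
NUM_EXPERTS, TOP_K = 8, 2    # route each token to its top 2 of 8 experts

# Each expert is a small feed-forward network; the router scores experts per token.
W_in = rng.normal(0, 0.02, (NUM_EXPERTS, D_MODEL, D_FF))
W_out = rng.normal(0, 0.02, (NUM_EXPERTS, D_FF, D_MODEL))
W_router = rng.normal(0, 0.02, (D_MODEL, NUM_EXPERTS))

def moe_layer(x):
    """x: (num_tokens, D_MODEL) -> (num_tokens, D_MODEL)."""
    scores = x @ W_router                            # (tokens, experts)
    probs = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
    top_k = np.argsort(-probs, axis=-1)[:, :TOP_K]   # indices of the chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in top_k[t]:
            h = np.maximum(x[t] @ W_in[e], 0)        # expert FFN with ReLU
            out[t] += probs[t, e] * (h @ W_out[e])   # weight output by router score
    return out

tokens = rng.normal(size=(4, D_MODEL))
print(moe_layer(tokens).shape)  # (4, 64): only 2 of 8 experts ran for each token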
Former quant investor Jeffrey Emanuel commented, Well, that didn't take too long. These Huawei AI training chips are barely on the radar of U.S. companies and stock market investors, but they really should be if well over 40% of NVIDIA's true sales come from China, including Singapore and Vietnam because of smuggling. Huawei is, however, on the radar of NVIDIA CEO Jensen Huang. In remarks last week, he said, Huawei is the single most formidable technology company in China. They have conquered every market they've engaged.
I think their presence in AI is growing every single year. We can't assume they're not going to be a factor. The other dimension of China-U.S. competition taking a troubling turn is the struggle to retain talent. Earlier this month, a Republican bill to ban Chinese students made waves, with many believing that such a move would stifle the talent pipeline. Still, perhaps an even more pressing issue is that some of the leading Chinese AI scientists are starting to self-deport.
43-year-old former Microsoft researcher Guojun Qi has joined Westlake University in Hangzhou after a decade-long career in the U.S. During that time, he was chief AI scientist at Huawei Research USA and also won several prestigious awards, including the Microsoft Fellowship, the IBM Fellowship, and the best paper award at the Association for Computing Machinery's International Conference on Multimedia.
Qi said, I was drawn to the free-spirited atmosphere at Westlake University and wanted to come back and pursue something I truly wanted to do. A China-based British medical AI researcher going by Luke commented, Big AI news in China. Dr. Qi, an absolute legend in AI who worked in the US for 10 years, has decided to come back to China. For him to leave the States and head to China has to be a huge wake-up call for AI research friends. And while the rest of the post kind of reads like a CCP plant account, the resonance of these posts is once again part of the story.
Dai WW, identifying only as a Chinese national interested in geopolitics, posted, The American AI industry is now facing a dilemma. If it doesn't use Chinese engineers, its AI will fall behind China's. If it uses Chinese engineers, they will return to China. Now, one could assume that this was a political move to recall a top AI researcher and generate fodder for propaganda, but even if that's the case, the contest for talent is still real.
As of 2022, China trained 47% of the world's top quintile AI undergrads. The U.S. graduated around 18%. Liberal commentator Matt Yglesias wrote, Nobody wants to hear it, but it's hard for America to have more people than China working on AI or batteries or drones or any other key field, as long as there are four times as many Chinese as Americans. You need an abundance of people. Ultimately, the question of whether China is in the lead or merely catching up is secondary to the point that the entire tenor of the contest has changed in just a few short months.
Much reporting suggests that open-source and wide-scale AI deployment are now official government policy. And while we're not quite at the stage where Chinese models are being pushed abroad in a digital Belt and Road Initiative, many in Washington believe that that could be the next step. Investor Balaji Srinivasan wrote a long thread on what he sees happening. He called it AI overproduction and wrote, China seeks to commoditize their complements. So over the following months, I expect a complete blitz of Chinese open-source AI models for everything from computer vision to robotics to image generation.
Why? I'm just inferring this from public statements, but their apparent goal is to take the profit out of AI software since they make money on AI-enabled hardware. Basically, they want to do to US tech, the last stronghold, what they already did to US manufacturing: namely, copy it, optimize it, scale it, and then wreck the Western original with low prices. China thinks it has the opportunity to hit US tech companies, boost its prestige, help its internal economy, and take the margins out of AI software globally, at least at the model level.
I don't know if they'll succeed at the app layer, but it could be hard for closed-source AI model developers to recoup the high fixed costs associated with training state-of-the-art models when great open-source models are available. I agree that it's surprising that the country of the Great Firewall is suddenly the country of open-source AI, but it is consistent in a different way, which is that China is just focused on doing whatever it takes to win, even to the point of copying partially abandoned Western values like open-source, which seem to be the hardest thing to adopt.
And when it comes to this idea of China doing anything to win, that's basically the point that Morgan Stanley analysts are making as well. They write, DeepSeek's emergence is more than just an AI milestone. It's a timely symbol of China's ambition to claim a leadership role in the tech revolution. Beyond financial markets, DeepSeek's emergence comes at a moment of national pride. The box office triumph of Ne Zha 2 and the video game Black Myth: Wukong have given rise to a grassroots confidence rather than a top-down directive from Beijing, reinforcing newfound belief in national identity and tradition.
Kai-Fu Lee, the founder of Chinese startup 01.AI, is just about ready to claim the lead, stating, Previously, I think it was a six- to nine-month gap, and behind in everything. And now I think that's probably three months behind in some of the core technologies, but actually ahead in some specific areas. He argued that U.S. export controls had been a double-edged sword, forcing innovation to speed up in China, saying, The fact that DeepSeek was able to figure out the chain of thought with a new way to do reinforcement learning shows it's either catching up with the U.S., learning quickly, or maybe even more innovative now. So as to this question of is China ahead,
I think in practical terms, the answer remains no. Bindu Reddy writes, A myth is floating around that China is ahead of the US in AI. No, it's not. Here are the best models: reasoning, o1; coding, Sonnet 3.7; instruct, GPT-4.5; OCR, Gemini Flash; real-time, Grok 3; video, Veo; image, Flux Ultra. Zero of these models are from China. Almost all of them are from the US. Sure, China has good open source models, but they're used by less than 1% in the real world.
I think that's kind of cold comfort, though. What people are watching isn't the static analysis of where things are right now. What they're watching is the trend lines. Right now, the vibe is that China is outcompeting the U.S. when it comes to AI. And even if they haven't superseded the state of the art, that's the trend line they're on.
What's more, who's to say that the only thing that matters in the AI competition is owning the state of the art? Bindu writes that they're used by less than 1% in the real world, but that could change fast if China can totally change the cost profile and evangelize their own models. There are going to be lots and lots of applications of AI that are fine with the quote-unquote good enough thing that's just 5-10% worse than whatever the leading version is. Anyways, this continues to be a fascinating dimension of this story. It certainly adds an extra layer of dynamism to the whole space.
For now though, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.