Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. And apologies, this is going to be a little bit briefer today, as this is the second time I've had to record it; Descript lost the first file and I've been trying to recover it. We have a couple of interesting stories though, kicking off with DeepSeek not slowing down at all. In fact, they appear to be staffing up for an AGI push.
The Chinese lab that has everyone's head in a tizzy has posted half a dozen jobs focused on developing AGI or artificial general intelligence. They're looking for data experts, deep learning researchers, and a legal chief with that legal role focused on developing a risk governance framework for AGI as well as leading communications with government agencies and regulators.
As Bloomberg puts it, the postings offer a glimpse into DeepSeek's ambition to remain at the forefront of Chinese AI. However, it also feels like this is an attempt to move beyond just doing a cheaper, faster version of the same thing that we have over here. In other words, up until now, DeepSeek has been innovative in their approach to model training and distillation, but they haven't produced anything that pushes beyond leading US models in performance. These job postings seem to indicate that that is where they want to head, which makes total sense given the success they've had so far.
Meanwhile, OpenAI has been assisting the U.S. government with its probe of DeepSeek. There's been a lot of reporting and speculation around whether DeepSeek inappropriately, or at least without authorization, used OpenAI's models to train their own. Chris Lehane, OpenAI's chief global affairs officer, told Bloomberg TV, "We've seen some evidence and we're continuing to review."
As DeepSeek was getting all of that press and attention, security researchers from Microsoft started to notify their partners at OpenAI that, quote, groups linked to DeepSeek were exfiltrating large amounts of data using OpenAI's API. OpenAI also said in January that it was, quote, aware of and reviewing indications that DeepSeek had trained its model on the output of OpenAI's proprietary systems through a method called distillation, which uses reasoning outputs from a model like o1 to transfer that capability into another model.
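For listeners who want a concrete picture of what distillation means, here is a minimal, purely illustrative sketch. The toy "teacher" and "student" below are stand-ins invented for this example; real model distillation happens at vastly larger scale, but the core move is the same: train the student to imitate the teacher's outputs rather than the original ground-truth data.

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Toy "teacher": a stand-in for a large model. It maps an input x to a
# probability over two classes via a fixed logistic function.
def teacher(x: float) -> float:
    return sigmoid(3.0 * x - 1.0)

# Build a synthetic "distillation dataset": inputs paired with the
# teacher's soft outputs, not hard ground-truth labels.
random.seed(0)
data = [(x, teacher(x)) for x in (random.uniform(-2.0, 2.0) for _ in range(200))]

# Student: a two-parameter logistic model q(x) = sigmoid(w*x + b),
# trained by full-batch gradient descent on cross-entropy against the
# teacher's soft labels. The gradient of cross-entropy with respect to
# the logit is simply (q - p).
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, p in data:
        q = sigmoid(w * x + b)
        gw += (q - p) * x
        gb += (q - p)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# After training, the student closely reproduces the teacher's behavior
# without ever seeing the teacher's own training data.
print(w, b)
```

Because the toy teacher is itself a logistic function, the student can recover it almost exactly; with real models the student is typically smaller and only approximates the teacher, which is what makes distillation attractive as a shortcut.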
Now, one of the things that Chris Lehane, as their global affairs officer, has had to deal with is criticism that OpenAI is being hypocritical, given that they're currently defending multiple active copyright lawsuits related to their collection of training data. However, Lehane tried to explain the difference with an analogy: "If you go to the library and read from a book and learn from that book, that's totally fine. That happens all the time in the AI space. There's another version where you take the book, put your name on the book, slap a cover on the book, and hand it out as if it's your book. That's the replication. That's what we're concerned about and have seen some evidence of."
Indeed, not to get too wonky or too deep into the copyright issues, but effectively that is what they have to prove: that the outputs of OpenAI are the effective equivalent of taking the book, putting their name on it, slapping a new cover on it, and handing it out as their own, to the financial detriment of the original copyright holders. Anyways, the point is that DeepSeek continues to be both a hot topic and a going concern, from a performance as well as a policy standpoint.
Another OpenAI story: the company is apparently getting close to manufacturing their own AI chip. According to Reuters, they're finalizing the chip design over the next few months and preparing to send it to TSMC for fabrication. It will take around six months from that date to produce the first test run of chips, which would put OpenAI on schedule to begin larger production in 2026 if there are no major issues. That is a pretty rapid development cycle for a first chip design, a process that usually takes years.
Early reports only surfaced in October that this was something OpenAI was pursuing. And basically, they're going for this for the same reason all the big companies are: trying to reduce their reliance on NVIDIA. They're currently working with Broadcom on the design, and they also hired former Google TPU engineer Richard Ho to lead an internal team of around 40 people. OpenAI is planning to spend hundreds of millions of dollars on this, which sounds like a lot, but then again, it's kind of incidental when you consider the $500 billion Stargate infrastructure project that they are also leading.
The AI Action Summit has begun in Paris with world leaders and AI CEOs gathering in the French capital. The conference is being used as a way to reset the EU stance on AI, in recognition that they're falling behind compared to the US and China. There is already quite a bit of controversy, including a pretty blistering speech from JD Vance this morning. Basically, we are going to have a lot more to talk about here in the days to come. But just flagging that that's going on, and it shows a pretty interesting look at the geopolitics of AI at the moment.
Lastly today, an update in our Super Bowl coverage. We spent a bunch of time on Monday looking at the ads that related to AI products that were premiered on Sunday. But one company that might have won the Super Bowl without even running an ad was Perplexity. They took another tried and true strategy of Super Bowl advertising, which is to convert your money that you would have spent on an ad spot into a big old contest that distributes that money directly to people instead.
On Sunday, CEO Aravind Srinivas tweeted, "There will be no Perplexity Super Bowl ad. Instead, there's a Super Bowl contest. You install the app and ask at least five questions during the game and we'll pick one winner to give a million dollars. Ask like a millionaire." According to data analytics platform AppFigures, this promotion led to a 50% increase in daily app downloads, and Perplexity rose from 257 to 49 in App Store rankings.
And for all the people clamoring for the AI ads to show use cases, Perplexity's approach actually fulfilled that brief. By asking people to download the app and ask questions, it guided users to familiarize themselves with how it works. And of course, with the game, there was no shortage of sports facts to ask, so a pretty good context for this. So kudos to Perplexity for a well-executed campaign. That's going to do it, however, for the headlines. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.
Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk.
Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.
For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Hello, AI Daily Brief listeners. Taking a quick break to share some very interesting findings from KPMG's latest AI Quarterly Pulse Survey.
Did you know that 67% of business leaders expect AI to fundamentally transform their businesses within the next two years? And yet, it's not all smooth sailing. The biggest challenges that they face include things like data quality, risk management, and employee adoption. KPMG is at the forefront of helping organizations navigate these hurdles. They're not just talking about AI, they're leading the charge with practical solutions and real-world applications.
For instance, over half of the organizations surveyed are exploring AI agents to handle tasks like administrative duties and call center operations. So if you're looking to stay ahead in the AI game, keep an eye on KPMG. They're not just a part of the conversation, they're helping shape it. Learn more about how KPMG is driving AI innovation at kpmg.com slash US. Well, friends, the slap fight between Elon Musk and Sam Altman continues.
The latest swing from left field comes from Elon Musk, who is leading a consortium that has submitted a bid of $97.4 billion to the OpenAI board to buy the nonprofit that controls the company. The Wall Street Journal reported that the deal would be structured with xAI as the lead investor, implying a merger if it went through.
Supporting the bid were Valor Equity Partners, Baron Capital, Atreides Management, Vy Capital, 8VC, a venture firm led by Palantir co-founder Joe Lonsdale, and Ari Emanuel, the CEO of entertainment company Endeavor. In a statement through his lawyers, Musk said, "It's time for OpenAI to return to the open-source, safety-focused force for good it once was. We will make sure that happens."
You might notice that the offer is substantially short of OpenAI's most recent reported valuation of $340 billion; that's because Musk is technically bidding for the assets of the OpenAI nonprofit organization, not the for-profit company. As you would expect, Sam Altman rejected the bid, posting, "No thank you, but we will buy Twitter for $9.74 billion if you want." Remember, Musk took Twitter private at $44 billion; however, Fidelity marked their shares down to just $9.4 billion last October.
Musk himself obviously responded, calling Altman a swindler and posting a clip of Altman's congressional appearance from last year in which he said he doesn't have equity in the company and "I'm doing this because I love it." Musk captioned the clip "Scam Altman."
The discussion is largely focused on whether this is a serious attempt to purchase OpenAI or not. Rob Rosenberg, the founder of Telluride Legal Strategies, said, "I think he's trying to make a statement and bring more attention to the fact that OpenAI is still on this course to switch from being a non-profit to a for-profit company." And indeed, one way to think about this is that it is, yes, a serious move, but not necessarily with the intention of actually consummating the deal. It might more likely be a strategic play in Musk's long-running legal battle over OpenAI's conversion to a for-profit.
The trickiest issue around that conversion is figuring out how much OpenAI is worth. Legally, OpenAI is required to compensate the nonprofit for the assets it is taking into the for-profit, and it has to do so at fair market value. Generally, this is achieved by getting multiple independent valuations, but of course that's a lot easier to pin down when the assets are buildings and equipment rather than world-leading AI models. Musk's bid appears to be an attempt to attach a market value to those assets.
The consortium also said that they're willing to match or exceed any higher bids that come in.
Mike Isaac, a tech reporter for the New York Times, gave his assessment of the situation, posting, "'Because of how complicated OpenAI's non-profit structure beginnings are, this bid is a giant pain in the neck for OAI leadership, which is currently trying to convert itself into a for-profit company. The bid is, essentially, an attempt at gumming up the works.'" Austin Allred writes, "'So my very basic read is Elon's offer isn't serious, or at least he doesn't think they'll actually accept. It just forces OpenAI to adjust the fair market value much higher as they try to purchase it out of the non-profit. Kind of savage.'"
Another poster tried to explain it a little further. One, they write, OpenAI is buying assets from the nonprofit for $40 billion, valued at 25% of the equity of the new OpenAI for-profit. Two, xAI offers $100 billion for the nonprofit. Three, now the assets are worth more. Four, OpenAI has to give more equity to justify it. Five, OpenAI has less equity to sell to investors, so ultimately less funding and a delayed process. The open source and safety Elon mentioned is a ploy to align with OpenAI's mission, because the board can reject the offer on the basis of not aligning with the mission regardless of the price. It's not a serious offer; it's a tactic to slow down OpenAI, is what I'm getting out of it.
Now for background, OpenAI's board currently has 10 members, including Sam Altman himself. Since Altman was fired and then returned as CEO in late 2023, there's been a lot of turnover, including the addition of economist Larry Summers and retired General Paul Nakasone. Recent reporting suggests that a $40 billion payout was being considered by the board. But with Altman on both sides of the deal and an outside offer now on the table, it's difficult to see how that valuation stands up.
Analyst Nathan Young posted the other side of this argument, writing in a long thread, "Why is it in the nonprofit's best interest to do this? Why is it in the interest of humanity? I don't know all the facts. Perhaps Altman and the board have good reason to prefer Altman's lower offer. Perhaps I'm wrong about something. But Altman has a history of moving fast and getting what he wants."
Researcher Adam Karvonen picked up another wrinkle in the story, pointing to the "merge and assist" clause in the OpenAI charter, under which OpenAI commits to stop competing with and start assisting a value-aligned, safety-conscious project that comes close to building AGI before it does. The charter says the details would be worked out on a case-by-case basis. Karvonen wrote,
"OpenAI's merge and assist clause in their charter is a major wildcard that seems like it could be triggered any moment. Is Elon trying to trigger it?" One other take that was prevalent enough to be at least worth mentioning is the question of what this says about xAI's success.
Developer Nick Dobos writes, speculating, "If Elon is trying seriously to buy OpenAI, there's a good chance the Grok 3 training run failed big time, and they can't raise for a new run and/or see no way to match OpenAI's progress. Serious question: why else buy OpenAI for $97 billion when your own AI lab is valued at $40 billion? Do we really think the brand is worth an extra $50 billion to Elon? It's not like he can't get the word out. He supposedly has the top researchers, builders, notoriety, Twitter network effect, and infrastructure to build a comparable AI end product. Unless xAI can't."
Still, others think it's a lot simpler. Eric Dolan writes, "Game theory here: Elon doesn't need to win. He just needs Sam to lose, or more accurately, be distracted while Grok, others, and open source catch up." Pedro Domingos captured the sentiment of many people when he wrote, "The OpenAI soap opera is the gift that keeps giving." And indeed, the drama was extended in an interview Sam Altman gave to Bloomberg this morning. "OpenAI is not for sale. The OpenAI mission is not for sale. Elon tries all sorts of things for a long time. He's not going to be able to do it."
Asked whether he takes the bid seriously at all, and what he thinks Musk is trying to drive at, Altman said, "I think he's probably just trying to slow us down. He obviously is a competitor. He's working hard and he's raised a lot of money for xAI, and they're trying to compete with us from a technological perspective, from getting the product into the market. And I wish he would just compete by building a better product, but I think there's been a lot of tactics, many, many lawsuits, all sorts of other crazy stuff, now this. And we'll try to just put our head down and keep working."
Asked whether Musk's approach comes from a position of insecurity about xAI, he said, "Probably his whole life is from a position of insecurity. I feel for the guy." Pressed on whether he really feels for him: "I do, actually. I don't think he's like a happy person. I do feel for him." And asked whether he worries about Musk's proximity to the president and his ability to influence U.S. policy on AI, Altman said, "Not particularly. Maybe I should, but not particularly. I mean, I try to just wake up and think about how we're going to make our technology better."
Ultimately, I absolutely hate, deplore, pick your very strong word, the personal politics in this. I think there is more than a little ego in this particular fight. But at the same time, I think it does dramatize just how significant the race to AGI is, and how intensely different players are going to be willing to play that game.
We will, of course, keep you posted if anything comes of this. But for now, it just adds some more volatility to the already very wild AI space. Appreciate you listening or watching. And until next time, peace.