Sundar Pichai emphasized that 2025 is a critical year for Google, urging the company to internalize the urgency of the moment and move faster. He highlighted the need to unlock the benefits of AI technology and solve real user problems, especially as Google faces antitrust lawsuits and competition from rivals like OpenAI's ChatGPT.
Google is dealing with multiple antitrust lawsuits, including a U.S. court ruling that it maintains a monopoly over search and a pending ruling on its advertising business. The DOJ has requested that Google divest its Chrome browser division, and the UK competition watchdog has raised concerns about its ad tech practices.
Google plans to scale Gemini on the consumer side, aiming to make it the next app to reach half a billion users. Executives see Gemini as a top priority, with DeepMind co-founder Demis Hassabis pledging to 'turbocharge' the app and build a universal assistant capable of operating across any domain, modality, or device.
xAI missed its release schedule for Grok 3, which was intended to rival OpenAI's GPT-4o and Google's Gemini 2.0 Flash. Speculation suggests that scaling laws may have hit a wall, or that Elon Musk may have overpromised. Instead, xAI is reportedly releasing Grok 2.5, indicating potential challenges in scaling or team size.
DeepSeek V3 is an open-source ultra-large model with 671 billion parameters, outperforming Meta's Llama 3.1 405B and nearing the performance of leading models from OpenAI and Anthropic. It uses a mixture-of-experts architecture to reduce inference costs, with training reportedly costing only $5.5 million, a fraction of Western rivals' expenses.
OpenAI is converting to a public benefit corporation (PBC) to balance shareholder interests with its mission to ensure artificial general intelligence (AGI) benefits all of humanity. The move allows the company to issue ordinary shares while maintaining a nonprofit arm. This transition is driven by the need for significant capital to scale AGI development, as donations alone are insufficient.
Legal challenges include opposition from Elon Musk and Meta, who argue the conversion has seismic implications for Silicon Valley. AI safety advocates like Encode and Geoffrey Hinton warn that AGI should be controlled by a public charity prioritizing safety, not a for-profit entity. Critics also question governance details and guardrails to ensure public benefit.
OpenAI and Microsoft have agreed on a straightforward definition of AGI: systems capable of generating $100 billion in profit. This agreement ensures Microsoft retains access to OpenAI's technology for years, avoiding the risk of OpenAI's board revoking the deal by declaring AGI achievement.
Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes.
This is the first episode of 2025, of course, and I had planned on using the chance to catch up on all the news that had happened over the past couple weeks, but really there has been very little. It was clearly a good time to go on break. Still, there were a few interesting things, and that's what we're going to be covering today, starting with the fact that Google CEO Sundar Pichai has made it clear that AI is the company's focus in what is a crucial year. CNBC has obtained leaked audio of a strategy meeting held the week before Christmas, and
reportedly complete with ugly holiday sweaters. Pichai told staff, quote, I think 2025 will be critical. I think it's really important we internalize the urgency of this moment and need to move faster as a company. The stakes are high. These are disruptive moments. In 2025, we need to be relentlessly focused on unlocking the benefits of this technology and solve real user problems.
Google is also in a challenging moment. They are in the midst of multiple antitrust lawsuits, with a U.S. court having already ruled that the company maintains a monopoly over search and another ruling on their advertising business expected early this year. The DOJ has requested an order that the company divest its Chrome browser division. Meanwhile, the British competition watchdog has issued a statement of objections over Google's ad tech practices, with the regulator making a provisional finding that Google was harming competition in the U.K.
Pichai said that Google needs to focus on building, quote, big new businesses as a top priority.
Their AI platform, Gemini, was the number one candidate, with executives stating it could be the next Google app to hit a half billion users. Fifteen Google apps have hit that milestone over the years. Pichai said, Scaling Gemini on the consumer side will be our biggest focus next year.
At that meeting, the CEO showed a chart of Gemini's competition, with OpenAI's ChatGPT the number one rival. Pichai acknowledged that Google had to play catch-up, stating, In history, you don't always need to be first, but you have to execute well and really be best in class as a product. I think that's what 2025 is all about.
The executive team took questions from staff, with one employee noting that branding is one of the major challenges. They recognize that ChatGPT is becoming synonymous with AI the same way Google is with search. DeepMind co-founder Demis Hassabis stepped in to respond, pledging to, quote, turbocharge the Gemini app. He suggested that AI products are going to, quote, evolve massively over the next year or two. Hassabis described his goal of building a universal assistant that can, quote, seamlessly operate over any domain, any modality, or any device.
Responding to another question, Hassabis said that they had no plans to offer ultra-premium $200 subscriptions as OpenAI has done. Throughout the presentation, Pichai referred to the need to, quote, stay scrappy. This seems to be an acknowledgement that the CEO was asking employees to do more with less, given that Google's headcount is down 5% since 2022. He said, In early Google days, you look at how the founders built our data centers. They were really, really scrappy in every decision they made. Often constraints led to creativity. Not all problems are always solved by headcount.
So ultimately, none of this is particularly surprising, but still really interesting to hear directly how they are thinking about this particular set of issues.
Next up, xAI joins the long list of AI companies failing to ship their latest frontier model. Over the summer, Elon Musk said that Grok 3 would arrive by the end of 2024. He posted, quote, Grok 3 end of year after training on 100,000 H100s should be really something special. Grok 3 will be the first model trained on the Colossus supercluster and could well be the first in the world trained on a cluster of that size. It was intended to rival OpenAI's GPT-4o and Google's Gemini 2.0 Flash, being the first xAI model to truly compete on the bleeding edge.
Aside from missing the release schedule, it also seems that Grok 3 won't even be the next model rolled out by xAI. Tibor Blaho picked up a snippet of code suggesting that Grok 2.5 will be coming soon.
Now, ultimately, this is less interesting as an indicator of particular challenges at xAI and more because it continues the pattern of setbacks with flagship models that has become normal over the last few months. Some are going so far as to speculate that Grok 3 may have proven that scaling laws have hit a wall. Until very recently, the logic was that throwing more data and more compute at a training run would yield more performant models.
As the first company publicly known to have an operational training cluster with 100,000 GPUs, xAI may be in the midst of demonstrating that the way we thought about scaling before might just not be holding up. Then again, it could be that Elon was talking out of turn when he said that it was going to come this year, and that this is nothing more than the difficulties of shipping with a small team.
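Just to make the scaling-law logic concrete, the reference point most people have in mind is the Chinchilla-style power law from Hoffmann et al. (2022), which models training loss as a function of parameters and data. To be clear, this formula is general background from that paper, not anything specific to Grok, and the exponents are the ones Hoffmann et al. fit for their own setup:

$$
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
$$

Here $N$ is the parameter count, $D$ the number of training tokens, and $E$ an irreducible loss floor; the paper fit exponents of roughly $\alpha \approx 0.34$ and $\beta \approx 0.28$. The "wall" question is simply whether loss keeps sliding down this smooth curve once you push $N$ and $D$ to 100,000-GPU scale, or whether returns flatten faster than the formula predicts.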
Another big theme we anticipate for 2025 is competition from China. Chinese AI startup DeepSeek has released their new ultra-large model, DeepSeek V3. The model is open source and fully available on Hugging Face. According to benchmarks performed by DeepSeek, this latest model is outperforming Meta's frontier Llama 3.1 405B. It's also pretty close to the performance of leading models from OpenAI and Anthropic. DeepSeek's model has 671 billion parameters but uses novel architecture to cut down on inference costs.
Rather than engaging the entire model, it uses what's called a mixture-of-experts architecture to only activate certain parameters as needed. DeepSeek claims to have used multiple hardware and algorithmic optimizations during their training run. This led to a total cost of around $5.5 million for training, which if true would be a fraction of the amount spent by Western rivals.
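To make the mixture-of-experts idea concrete, here's a minimal sketch of the routing mechanism in Python with NumPy. Everything in it is an illustrative assumption rather than DeepSeek's actual design: the expert count, the top-k value, and the dimensions are toy numbers, and real experts are full feed-forward networks rather than single matrices.

```python
# Minimal sketch of mixture-of-experts (MoE) routing.
# All sizes are toy numbers for illustration, NOT DeepSeek V3's real configuration.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts in the layer
TOP_K = 2         # experts actually activated per token (the sparse part)
D_MODEL = 16      # hidden dimension

# Each "expert" here is a single random matrix standing in for a feed-forward block.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
           for _ in range(NUM_EXPERTS)]
# The router is a small linear layer that scores every expert for a given token.
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) / np.sqrt(D_MODEL)

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Run one token through only its top-k experts, weighted by router scores."""
    logits = token @ router                  # one score per expert
    top = np.argsort(logits)[-TOP_K:]        # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over just the chosen experts
    # Only TOP_K of NUM_EXPERTS experts execute; the rest of the parameters
    # sit idle for this token, which is where the inference savings come from.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

print(moe_layer(rng.standard_normal(D_MODEL)).shape)  # (16,)
```

Even in the toy version you can see the trade: the model carries all eight experts' worth of parameters, but any single token only pays the compute cost of two of them, which is how a 671-billion-parameter model can activate just a fraction of itself per token.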
Still, there is a little bit of skepticism around the new announcement. Lucas Beyer, a researcher at OpenAI, found that DeepSeek V3 claims to be ChatGPT-4 in five out of eight responses, suggesting the model's extremely large training dataset was generated in part using ChatGPT.
Sam Altman also seemed to take a veiled swipe at the Chinese lab posting, It is relatively easy to copy something that you know works. It's extremely hard to do something new, risky, and difficult when you don't know if it will work. Individual researchers rightly get a lot of glory for what they do. It's the coolest thing in the world.
Still, at a certain point, all that matters is the results. Deedy Das of Menlo Ventures took the new model for a spin, posting, New era.
Finally today, speaking of a new era, as America prepares for a new administration in the White House, Meta is getting ahead of the move with a new policy lead. President of Global Affairs Nick Clegg has stepped down after six years in the role. He will be replaced by one of the company's most prominent Republican executives, Joel Kaplan. Prior to joining Meta, Kaplan served as the White House Deputy Chief of Staff for Policy in the George W. Bush administration. In a post explaining the end of his tenure, Clegg said that Kaplan, quote, is quite clearly the right person for the right job at the right time.
That, however, is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.
Over 8,000 global companies like LangChain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. All right, friends. Well, today we are talking about something that has been a long time coming. Of course, we are catching up from news over the last couple of weeks where everyone's been out for the holiday. And a couple of days after Christmas in that netherland between Christmas and New Year's,
OpenAI laid out their plans to convert into a for-profit company. This is something that has long been in the works. As we'll discuss later, there's a ton of legal implications around this and some big battles being fought.
But by way of background, in case you haven't been following the ins and outs of this, currently OpenAI exists as a for-profit organization controlled by a non-profit. Investors and employees are compensated through a capped profit scheme. The plan is to convert the existing for-profit organization into a Delaware Public Benefit Corporation or PBC. This structure will allow the issuance of ordinary shares of stock, but requires the company to balance shareholder interests with stakeholder and public benefit interests.
The public benefit in this case would be OpenAI's mission to, quote, ensure that artificial general intelligence benefits all of humanity. The nonprofit would continue to exist, with OpenAI intending to make it, quote, one of the best resourced nonprofits in history. The nonprofit would be granted shares in the PBC at a fair valuation determined independently. This has been one of the sticking points of the conversion, assuming it's allowed to move forward: figuring out what fair value looks like for one of the most unique startups in history is very challenging.
Their last venture round valued the company at $157 billion, but at the same time, the company is currently burning cash at a significant rate, operating at a $5 billion loss last year. The proposal seems to be an all-stock deal, which would avoid the need to raise over $100 billion in funding to buy out the nonprofit's stake in cash. Overall, OpenAI positioned the move as a way to enable each arm of the company to operate to its full potential.
They wrote, To some extent, none of this is particularly new.
What's more interesting is how they are starting to articulate and make the argument. They wrote, We began in 2015 as a research lab with a vision that AGI might really happen, and we wanted to help it go as well as possible. In those early days, we thought that progress relied on key ideas produced by top researchers and that supercomputing clusters were less important.
Eventually, it became clear that the most advanced AI would continuously use more and more compute, and that scaling large language models was a promising path to AGI rooted in an understanding of humanity. We would need far more compute and therefore far more capital than we could obtain with donations in order to pursue our mission. Altman has articulated things like this in the past, basically that this was a requirement of trying to achieve their mission of AGI, not a betrayal of some nonprofit ideal.
And in many ways now, they're positioning this as the natural next step. Continuing, they wrote, in 2019, we became more than a lab. We also became a startup. We estimated that we'd have to raise on the order of $10 billion to build AGI. This level of capital for compute and talent meant we needed to partner with investors in order to continue the nonprofit's mission. As we enter 2025, we will have to become more than a lab and a startup. We have to become an enduring company.
Now, this conversion is shaping up to be one of the more controversial legal battles in Silicon Valley history. A legal challenge mounted by Elon Musk, one of the company's first backers, has been dismissed by OpenAI as baseless, a case of sour grapes. But Musk is not alone in his suit. Less easily dismissed is a supporting argument from Meta. In December, Meta wrote to California Attorney General Rob Bonta, urging him to block the conversion. They argued that it would have, quote, seismic implications for Silicon Valley.
They wrote,
AI safety group Encode has also joined the fight against the conversion. That group was one of the co-sponsors of California's ill-fated AI regulation bill, SB 1047. In a proposed brief submitted to the court, Encode wrote, OpenAI and its CEO, Sam Altman, claim to be developing society-transforming technology, and those claims should be taken seriously. If the world truly is at the cusp of a new age of artificial general intelligence,
then the public has a profound interest in having that technology controlled by a public charity legally bound to prioritize safety and the public benefit, rather than an organization focused on generating financial returns for a few privileged investors. Now, as an aside, that is a very different reason for wanting to block this than what Musk or Meta are bringing to the table. Encode is making an argument about societal priorities, not the current state of the law. That's a fine argument to make, but I'm not sure how much bearing it's going to actually have on the case at hand.
Supporting Encode's brief is fellow AI safety advocate Geoffrey Hinton, who wrote, "OpenAI was founded as an explicitly safety-focused nonprofit and made a variety of safety-related promises in its charter. It received numerous tax and other benefits from its nonprofit status. Allowing it to tear all that up when it becomes inconvenient sends a very bad message to other actors in the system."
Now, in terms of the response, there really actually hasn't been all that much. Most people already have their opinions on this and have kind of just assumed it's coming one way or another if Elon can't fight it through the courts. Investor MG Siegler writes, OpenAI makes the case to shift into a for-profit. And while you might not like it, the case is actually sound.
On the other hand, some other folks, like Miles Brundage, while glad that OpenAI is sharing more of its thoughts publicly, argue that, quote, there are some red flags that need to be addressed urgently and better explained publicly before the transition goes through. Brundage, for example, says there is surprisingly little discussion of actual governance details, despite this arguably being the key issue. Brundage wants to know, besides board details, what other guardrails are being put in place to ensure that the nonprofit's existence doesn't let the PBC off too easily with regard to acting in the public interest.
So for those keeping track at home, the opposition to this is a combination of Elon being pissed, in whatever interpretation you want to take on that front; Meta suggesting that the precedent and implications for other parts of the startup industry are too severe; and safety advocates thinking that AGI shouldn't be the province of one company that doesn't have any particular obligation to the rest of the world.
Continuing though, a major reason OpenAI feels the need to convert into a for-profit is some quirks in the company charter around achieving AGI. When the company was founded, strict limits were placed on the commercialization of AGI. The nonprofit board was granted the exclusive power to decide when AGI was achieved with a lot of latitude in how they made that determination. Most importantly, any technology deemed to be AGI would be exempted from licensing agreements signed by OpenAI.
While the safety granted by this arrangement seemed good in theory when the company was founded in 2015, it has led to a lot of headaches more recently. Most notably, OpenAI's deal with Microsoft included a clause that blocked the company from using OpenAI's technology if AGI was achieved, functionally allowing OpenAI's board to revoke the deal by declaring that they considered their latest model to be AGI.
Part of why I think Microsoft spent so much of 2024 shoring up their own internal AI capabilities, with the semi-acquihire of Inflection, for example, is that in the wake of the board debacle of late 2023, the idea that OpenAI's board might just randomly pull that card seemed much more likely.
Still, it appears that progress has been made in the context of that specific relationship. According to The Information, the two companies have now settled on a common definition of AGI that gives the deal certainty. Rather than delving into the philosophy of machine consciousness or the technical ability to handle problems outside the training set, their AGI definition is remarkably straightforward. Last year, the companies reportedly signed an agreement stating that OpenAI will have achieved AGI when it has developed systems capable of generating $100 billion in profit.
Practically, of course, this means that Microsoft will continue to have access to OpenAI's technology for years, if not decades. The community reaction is pretty much exactly what you'd expect on this. Marco Anastasov says... Still, really interesting developments here to end last year, and I think they set up one of the big battles to take place in 2025.
For now, though, that is going to do it for today's AI Daily Brief. Glad to be back with you here on the other side of the holidays. And until next time, peace.