Today on the AI Daily Brief, President Biden has released a pair of policies, both designed to double down on U.S. AI supremacy. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. Sometimes on this show, we talk about new technology advancements. Oftentimes, we talk about applied generative AI, specifically in the enterprise. Sometimes we talk about policy, as you'll see from today's main episode. But then other times, we talk about the absolutely weird and unpredictable ways that AI is maturing that would have been beyond just about anyone's expectation.
Today, we actually have a couple stories like that. The first is that Google had to train the AI podcast hosts in its popular NotebookLM application not to get annoyed at users. When NotebookLM's Audio Overviews first came out, you had no ability to control or steer the conversation. You fed it some documents, and it decided what it was going to say. It was amazing right off the shelf, and it's the first thing that got people talking about this product.
Next, they added more granular controls to help steer the conversation. And then late last year, they rolled out a feature actually allowing users to interrupt the hosts to ask a question. Josh Woodward, VP of Google Labs, told TechCrunch that when the feature was first rolled out, the hosts would occasionally give a snippy response. They would say things like "I was getting to that" or "as I was about to say," which felt, in his words, "oddly adversarial."
Echoing those comments, the NotebookLM account on X posted: "After we launched interactive audio interviews, which lets you quote-unquote call in and ask the AI host a live question, we had to do some friendliness tuning, because the hosts seemed annoyed at being interrupted. File this away in things I never thought would be my job, but are."
This is surprising, but perhaps makes some sort of sense when you think about LLM architecture. Models are trained to give a response that is, in effect, the statistical average of their training data, and it's not unthinkable that the statistically average human response to being interrupted is to get a bit frustrated. However, a source familiar with the issue said it was more likely caused by the system's prompting design than by its training data. Woodward said his team fixed the problem by "testing a variety of prompts, often studying how people on the team would answer interruptions," and that they "landed on a new prompt we think feels more friendly and engaging."
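To make that concrete, here's a purely hypothetical sketch of what that kind of friendliness tuning can look like in practice. Neither prompt below is Google's actual NotebookLM prompt, and the message structure is an assumption; the point is simply that the fix described is a change to instructions, not to model weights.

```python
# Hypothetical sketch of "friendliness tuning" via system prompts.
# Neither prompt is Google's actual NotebookLM prompt; they only
# illustrate fixing tone with instructions rather than retraining.

SNIPPY_PROMPT = (
    "You are a podcast host. If a listener calls in mid-sentence, "
    "answer their question, then resume your point."
)

FRIENDLY_PROMPT = (
    "You are a warm, engaged podcast host. If a listener calls in "
    "mid-sentence, welcome the interruption, answer the question fully, "
    "and never imply the listener should have waited. Avoid phrases "
    "like 'I was getting to that' or 'as I was about to say.'"
)

def build_messages(system_prompt: str, monologue: str, interruption: str) -> list[dict]:
    """Assemble a chat-style request in which a listener interrupts the host."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "assistant", "content": monologue},   # host mid-monologue
        {"role": "user", "content": interruption},     # the live call-in
    ]

# Swapping SNIPPY_PROMPT for FRIENDLY_PROMPT changes the host's tone
# without touching the underlying model.
messages = build_messages(FRIENDLY_PROMPT, "So the key idea here is...", "Wait, what does that mean?")
```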
With the prompt now fixed, you can interrupt NotebookLM as much as you like without risking being scolded by the AI hosts. However, some people are disappointed. McKay Wrigley of Takeoff AI posted, "I don't think I'm in the minority here when I say I actually quite enjoy when the AIs are actually disagreeable." Anonymous leaker account iruletheworldmo also writes, "It's far better that they get annoyed. Please don't spoil this."
Next up, from the similar file of "how the heck did that happen?": OpenAI's o1 model sometimes thinks in Chinese, and no one seems to know why. Some people have noticed that o1 sometimes uses Chinese, Persian, or other languages in its reasoning steps, even when the question is in English. Rishab Jain, a Harvard student, posted last week, "Why did o1 pro randomly start thinking in Chinese? No part of the conversation (5+ messages) was in Chinese. Very interesting. Training data influence?"
So far, OpenAI hasn't acknowledged the quirk or provided an explanation, but some AI researchers have theories. Clément Delangue, the CEO of Hugging Face, commented that it could be "an impact of the fact that closed-source players use open-source AI currently dominated by Chinese players, like open-source datasets." Clem, never missing a chance to beat his drum, added that the countries or companies that win open-source AI will have massive power and influence on the future of AI.
Ted Xiao of Google DeepMind suggested the answer lies in the labeling process: many frontier labs rely on third-party data-labeling services for expert-level reasoning data, a number of which are based in China, and that Chinese linguistic influence could carry over into the model's chain of thought.
Others don't buy the idea that this is an artifact of the labeling process; o1 seems just as likely to switch to Hindi or Thai while working through a problem. An alternate theory is that the model has some understanding of, or preference for, which language will be most useful for a particular problem. We've seen this phenomenon pop up before, during the launch of Alibaba's QwQ model. Julien Chaumond, the CTO of Hugging Face, wrote, "QwQ switching to Chinese when it needs to really think about something, then switching back to English, is pretty cool."
And while some were skeptical, Tiezhen Wang, an engineer at Hugging Face, is convinced that this is the explanation. He wrote:
For example, I prefer doing math in Chinese because each digit is just one syllable, which makes calculations crisp and efficient. But when it comes to topics like unconscious bias, I automatically switch to English, mainly because that's where I first learned and absorbed those ideas. This is why I believe that keeping large language model training corpora unbiased and inclusive across all languages and cultures is so powerful. In Ludwig Wittgenstein's words, "the limits of my language mean the limits of my world." By embracing every linguistic nuance, we expand the model's worldview and allow it to learn from the full spectrum of human knowledge.
Even if two words from different languages share the same meaning on paper, their embeddings can diverge in an LLM because they carry unique cultural context and usage patterns. In my view, this inclusiveness not only creates a more equitable and accurate model, it also enables the LLM to handle a wider variety of tasks and unify the collective intelligence of all people no matter where they come from.
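Wang's point about the per-language "cost" of expressing the same idea is easy to see with an off-the-shelf tokenizer. Here's a minimal sketch using the open-source tiktoken library; the counts are specific to this tokenizer and tell us nothing definitive about o1, whose internals aren't public, but they illustrate that identical content can consume different numbers of tokens depending on the language.

```python
# Minimal sketch: the same arithmetic expressed in two languages can
# tokenize to very different lengths. Counts are tokenizer-specific
# and illustrative only; o1's tokenizer is not public.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English": "Nine hundred eighty-seven plus six hundred fifty-four",
    "Chinese": "九百八十七加六百五十四",
}

for language, text in samples.items():
    token_count = len(enc.encode(text))
    print(f"{language}: {token_count} tokens for {text!r}")
```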
Pretty cool little note. Like I said, kind of a non-traditional headlines edition. We'll be back with a bunch of normal stories tomorrow, but for now, let's just leave it at that, and a reminder that we are truly in uncharted waters right now. Thanks for listening. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.
Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk.
Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.
For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. If there is one thing that's clear about AI in 2025, it's that the agents are coming: vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Welcome back to the AI Daily Brief. The last week or two of an administration is always an interesting thing in US politics.
You get, of course, presidential pardons, and you also get some midnight rulemaking, which can be anything from something as petty as score settling to a president trying to make his last mark on an important issue.
Well, President Biden has chosen to use a part of that last round of political capital to focus on AI. In fact, he has released a pair of policies, both designed to double down on US AI supremacy. So let's talk about what is going on and why at least one of these is fairly controversial.
First up, on Tuesday, the president issued an executive order to allow private sector AI companies to build data centers on federal land. Companies will be allowed to lease land owned by the Department of Defense and the Department of Energy. As part of the deal, companies will need to develop enough clean energy resources to power their facilities. There are no grants attached. Companies will need to pay their own way.
Still, the policy could clear some of the red tape that has stymied development of new data centers over the past year. In a press release, the White House said, "Building AI infrastructure in the United States is a national security imperative. As AI's capabilities grow, so do its implications for American safety and security. Domestic data centers for training and operating powerful AI models will help the United States facilitate AI's safe and secure development, harness AI in service of national security, and prevent adversaries from accessing powerful systems to the detriment of our military and national security. It will also help prevent America from growing dependent on other countries to access powerful AI tools."
This represents the full embrace of the argument that has splashed across the opinion pages from Sam Altman, Dario Amodei, and so many others over the last year: that, A, AI needs to be viewed as a national security priority, and that, B, a key part of doing so is making sure the infrastructure for AI is there and available. Interestingly, environmental impacts and the emphasis on renewable energy seem to have been a key sticking point among Democrats.
In December, a group of senators led by Sheldon Whitehouse wrote to the president: "We urge you to reconsider any potential executive action that could lead to increased pollution and costs for consumers. We are the United States of America. There is no doubt that we can win the AI race while accelerating our decarbonization efforts." Now, of course, this particular letter didn't explain how; it serves more as a statement of protest from this particular group of senators.
Another concern appears to be the security risks involving AI. Companies building on federal land will be required to assess the security implications of AI models developed in these data centers. They will also need to purchase, quote, an appropriate share of American-made semiconductors. So that was one of the two new policies. The other, however, is the one that's gotten much more chatter. Bearing the appropriately laborious name Framework for Artificial Intelligence Diffusion, the 168-page document comes with a set of new export controls, the stated intention of which is to, quote, provide clarity to allied and partner nations about how they can benefit from AI and streamline licensing issues. However, in practice, the new rules are clearly an escalation in the policy to control where AI chips from U.S. companies can go.
The core of the new restrictions is the separation of the world into three tiers with different levels of export controls. The first tier is the most permissive and includes close allies like Japan and South Korea, who are entirely unaffected: basically, the countries that will have no restrictions on how many AI chips they can buy from the United States. The third tier comprises clear adversaries like Russia and China.
These countries are already barred from purchasing advanced chips and now face new restrictions regarding access to model weights. Interestingly, this is the first time that models themselves have been regulated as a controlled export. However, the new rules only affect frontier models. The list of adversaries is identical to the list of countries where the U.S. has established an arms embargo, suggesting that in the minds of the White House, military conflict and AI development now go hand in hand.
The final group, and easily the most controversial, is Tier 2, which includes a lot of countries you probably would have assumed would have much more unfettered access. Swept up in that category are Mexico, India, and Israel. These countries will be restricted to purchasing 50,000 GPUs each. Now, for a sense of scale, that's about half of what Elon Musk has deployed in the Colossus supercluster, just one company.
So this is a very serious restriction on what you can buy. Seeing India and Israel in that category really has some people scratching their heads. National Security Advisor Jake Sullivan told reporters, "...it ensures that the infrastructure for training frontier AI, the most exquisite AI systems at the frontier, happens either in America or in the jurisdictions of our closest allies, and that that capacity does not get offshored like chips and batteries and other industries that we've had to invest hundreds of billions of dollars to bring back onshore."
One of the things that has people really flummoxed about this new set of restrictions concerns loopholes. A key problem with previous export controls was that they didn't cover subsidiaries of companies from restricted countries operating in non-restricted countries, and for all the talk of closing loopholes, this rule doesn't really deal with that head-on at all. But as I mentioned up front, these rules are controversial, and that's not the only reason why.
Companies including Oracle and NVIDIA have come out against the rules. Oracle said that the rule will, quote, go down as one of the most destructive to ever hit the U.S. technology industry. Ned Finkel, NVIDIA's vice president of government affairs, said that the rule threatens to, quote, squander America's hard-won technological advantage by attempting to rig market outcomes and stifle competition.
Clearly appealing to the incoming administration, Finkel said, as the first Trump administration demonstrated, America wins through innovation, competition, and by sharing our technologies with the world, not by retreating behind a wall of government overreach. Finkel added, this last-minute Biden administration policy would be a legacy that will be criticized by U.S. industry and the global community. We would encourage President Biden to not preempt incoming President Trump by enacting a policy that will only harm the U.S. economy, set America back, and play into the hands of U.S. adversaries.
A bipartisan pair of senators, Ted Cruz and Maria Cantwell, made a similar argument in a letter to the Commerce Department back in December. They wrote, quote, Such draconian measures would severely hinder the sale of U.S. technology abroad and risk driving foreign buyers to Chinese competitors like Huawei.
Now, if you listened to Long Read Sunday last weekend, you'll know that there is a lot of debate around this right now. I shared economist Tyler Cowen's piece, in which he argued that the success of DeepSeek, which, in a classic case of necessity being the mother of invention, trained its model for cheap by getting around the restrictions, suggested to him that those restrictions have unintended consequences that make them not so valuable.
But then there was the CEO of Anthropic writing in the Wall Street Journal that Trump can keep America's AI advantage by doubling down on strong export controls. As to what incoming President Trump does on this issue, it is very hard to say. On the one hand, being tough on China is part of his pitch; on the other hand, so is being supportive of American industry. Will this be one of the first times that we see incoming White House AI and crypto czar David Sacks get involved in policy? We don't have very long to wait to find out.
For now, though, that is going to do it for today's AI Daily Brief. Until next time, peace.