OpenAI's 'Economic Blueprint for America' outlines strategies to secure U.S. leadership in the AI era by driving competitiveness, innovation, and reindustrialization. Key goals include recommending federal AI policies, streamlining regulations, and developing infrastructure. The blueprint emphasizes creating AI economic zones, fostering educational initiatives, and establishing global safety standards to ensure AI benefits are widely shared.
OpenAI argues that federal AI policies and streamlined regulations are essential to prevent a patchwork of state and international rules that could hinder U.S. competitiveness. They advocate for a unified approach to ensure frontier AI models promote economic and national security, reduce bureaucratic obstacles, and foster government-industry collaboration.
Infrastructure is central to OpenAI's vision, as they believe building AI infrastructure—such as chips, data centers, and energy systems—will catalyze U.S. reindustrialization and global competitiveness. They warn that without rapid investment, funds may flow to projects backed by China, undermining democratic AI ecosystems. OpenAI proposes AI economic zones and streamlined permitting to accelerate infrastructure development.
OpenAI acknowledges that AI will transform the workforce, particularly in roles involving routine tasks. However, they emphasize that AI will not eliminate jobs entirely but will lead to workforce adaptation. They advocate for training programs to cultivate AI talent, especially in regions that have not benefited from previous innovation waves, ensuring widespread economic opportunity.
OpenAI focuses on 'rules of the road' to address immediate AI risks, such as deepfakes and child safety. They propose applying provenance data to AI-generated content and empowering users to control how their personal data is used. These measures aim to build trust and ensure AI tools are used responsibly, addressing current regulatory concerns rather than speculative existential risks.
OpenAI's call for a nationwide AI education strategy aims to prepare the U.S. workforce for the AI-driven economy. By increasing federal spending on education and training, they seek to cultivate AI talent across the country, particularly in underserved areas. This strategy is crucial for ensuring that the economic benefits of AI are widely distributed and that the U.S. maintains its competitive edge.
OpenAI warns that without swift action, China could dominate AI development by channeling global investments into its projects. They advocate for U.S.-led infrastructure projects and international coalitions to establish democratic AI ecosystems. By prioritizing AI infrastructure and global collaboration, OpenAI aims to counter China's influence and ensure U.S. leadership in AI.
Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. We're going to talk about this a little bit in the main episode, but Zuckerberg has been on a media tour defending the decision of Meta to stop fact-checking. And as part of a larger conversation with Joe Rogan, he discussed how AI could impact engineering roles at companies like Meta.
On that show, as Business Insider reported, he suggested that AI could soon do the work of mid-level engineers who write code at companies like Meta. Meanwhile, Bloomberg is predicting that Wall Street could lose as many as 200,000 jobs to AI over the next three to five years.
This comes from a Bloomberg Intelligence survey of chief information and technology officers, who said that, on average, they expect a net 3% of their workforce to be cut. The analysts homed in on back office, middle office, and operational roles, as well as changes to customer service.
Bloomberg Intelligence analysts wrote, "Any jobs involving routine, repetitive tasks are at risk, but AI will not eliminate them fully; rather it will lead to workforce transformation." One of the big things that we are watching for, certainly at Superintelligent, is to what extent this plays out on a task-by-task basis versus a role-by-role basis.
It's clear that for some time AI is going to be better at certain tasks than it is at entire roles, and that gives a window, even outside of the general human and corporate inertia that will also slow things down, in which the design of jobs might change fundamentally to adapt to this new reality. There's definitely bullishness in this report: 80% of respondents said that they expect generative AI to increase productivity and revenue generation by at least 5% in the next three to five years. Ultimately, the report is just one more reflection of the increased discourse around this particular question.
An interesting article out of TechCrunch: that publication surveyed 20 venture capitalists in an attempt to figure out what gives an AI startup a moat. AI startups took in $100 billion in venture capital dollars last year, almost a third of all fundraising, and there are big questions around what gives companies defensibility. Even OpenAI's strongest moat right now is arguably its brand rather than a huge lead in model sophistication, especially given how fast other companies catch up.
Responding to TechCrunch's survey, almost half of VCs said the thing that gives AI startups a moat is the quality of their proprietary data. In terms of trying to get specific around what might give someone a moat, Jason Mendel from Battery Ventures said, "...I'm looking for companies that have deep data and workflow moats. Access to unique proprietary data enables companies to deliver better products than their competitors, while a sticky workflow or user experience allows them to become the core system of engagement and intelligence that customers rely on daily."
Scott Beechuk, a partner at Norwest Venture Partners, said that proprietary data is especially important for startups trying to build vertical solutions, which is obviously a key part of the emerging agent market.
I noticed this article because it's also reflected in a lot of the chatter that I'm seeing in places like Twitter, where First Round partner Liz Wessel writes, "It feels like once a month I hear of yet another startup that claims to be building AI sales reps or SDRs. They all get to $1 million in annual recurring revenue in impressive time, one month, three months, and then stall out later due to insanely high and unsustainable churn. Curious to see which of these companies are still around in three years, have managed to retain customers, and how."
Subjectively, the piece of this that I'm most interested in is definitely the high end. This is one thing that we talk about with enterprises all the time: are the models themselves completely commoditized? And if so, what reason do you have to make different decisions about who you work with?
It's extra interesting now heading into the era of agents as companies are going to be forced to decide, do we go with a highly specialized vertical solution, maybe from a smaller company, or do we think that generalist agents from the big frontier labs are just going to take all of that out in so little time that it doesn't make sense to invest in an intermediate solution? These are the kind of decisions that people are weighing back and forth all the time right now, making it an extremely dynamic space.
On that theme of proprietary data, Bloomberg reports that AI labs are paying up for unused video footage shot by content creators. They report that OpenAI, Google, and Moonvalley have been paying hundreds of YouTubers for access to their unpublished videos. The companies are paying between $1 and $4 per minute of footage, with high-fidelity drone and 3D animation videos attracting a premium. The video is considered valuable as training data because it hasn't been posted online and therefore isn't contained in existing training sets.
Now, the obvious conclusion is that the labs are already at a point where every video on the internet has been ingested. It also implies that all of that publicly available video isn't enough to hit the scaling limits of pre-training, as seems to have happened with language models. Dan Levitt, Senior Vice President of Creators at talent agency Wasserman, said, "It's an arms race and they all need more footage. I see a window in the next couple of years where licensing footage is lucrative for creators who are open to doing so. But I don't think that window is going to last that long."
Finally today, lots of people are excited to see OpenAI appear to be building out their new robotics and consumer hardware division. Two months ago, the company scooped up Caitlin Kalinowski, a veteran hardware designer who most recently led the Orion AR glasses team at Meta. OpenAI has now posted a string of robotics-focused job ads to build a team around Kalinowski.
They're looking for a systems integration electrical engineer to, quote, help us design the sensor suite for our robots, and a mechanical robotics product engineer to create gears, actuators, motors, and linkages for robots. The job listings also describe the overall goal, stating: our robotics team is focused on unlocking general-purpose robotics and pushing towards AGI-level intelligence in dynamic real-world settings. Working across the entire model stack, we integrate cutting-edge hardware and software to explore a broad range of robotic form factors. We strive to seamlessly blend high-level AI capabilities with the constraints of physical hardware.
My guess is that in 2025, as agentic AI starts to come online, we're going to start to hear about embodied agents in the form of robots, and these industries will converge quite a bit more.
One thing we didn't get with this announcement is any information about whether any of this has to do with the potential collaboration between Sam Altman and Jony Ive. Will that actually turn into anything? We will just have to wait and see. For now, that's going to do it for today's AI Daily Brief Headlines edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.
Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk.
Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.
For a limited time, this audience gets $1,000 off Vanta at vanta.com slash NLW. That's V-A-N-T-A dot com slash NLW for $1,000 off.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market.
Welcome back to the AI Daily Brief. We are about a week away from the transition between the Biden administration and the second Trump administration, and there is definitely a bunch of jockeying and repositioning going on. The Information wrote about this at the end of last week in a piece called "Amazon Downplays DEI, Meta Plays Up Free Speech as Tech Tilts Right."
Now, the specific catalyst for that was Mark Zuckerberg of Meta announcing that they would be ending their relationship with fact-checkers and moving to a community fact-checking approach. And Zuckerberg even went on Joe Rogan to defend the position after it became controversial, reinforcing the idea that the company had faced what he called massive institutional pressure to basically start censoring content. But in the AI space specifically, there is definitely a meta-conversation starting to happen between the big labs and incoming President Donald Trump, even if he's not aware of it.
For Long Read Sunday this week, one of the pieces we read came from Anthropic CEO Dario Amodei, who published a piece in the Wall Street Journal called "Trump Can Keep America's AI Advantage." Now, we paired that with a piece by Tyler Cowen about how the recent Chinese model from DeepSeek made him reconsider just how effective chip export prohibitions and other pillars of AI policy vis-a-vis China would actually be. And interestingly, in an interview around the release of this new piece from OpenAI,
Chris Lehane, who runs policy at OpenAI (technically he's their VP of global affairs), said that the release of that model, an open-source model approaching o1 performance and claimed to have been trained for just $5.5 million, was something they had taken notice of as well.
So what we got this morning was a much more comprehensive approach to this conversation from OpenAI. The piece is called "AI in America: OpenAI's Economic Blueprint." It runs 15 pages and sets out a policy agenda that expands upon many of the ideas that have shown up in the op-ed pages over the last six months or so. In his foreword letter, Chris Lehane writes, We believe America needs to act now to maximize AI's possibilities while minimizing its harms. AI is too powerful a technology to be led and shaped by autocrats,
but that is the growing risk we face, while the economic opportunity AI presents is too compelling to forfeit. And so, of course, here we have echoes of Sam Altman's piece in the Washington Post from back in July, "Who will control the future of AI?", which argued that a democratic vision for artificial intelligence must prevail over an authoritarian one. Lehane continues: Shared prosperity is as near and measurable as the new jobs and growth to come from building the needed infrastructure. Soon, AI will help our children do things we can't.
Not far off is a future in which everyone's lives can be better than anyone's life is now. And so they say the goal of this document is to work with policymakers to make sure that that future comes to fruition. And indeed, this is not just a policy appeal; it's an appeal to an American vision of AI. By way of historical example, Lehane discusses why automobiles didn't take root in Europe, where they were invented.
He writes, "...in the United Kingdom, where some of the earliest cars were introduced, the new industry's growth was stunted by regulation. The 1865 Red Flag Act required a flag bearer to walk ahead of any car to warn others on the road and wave the car aside in favor of horse-drawn transport."
How could a person walk in front of a car without getting run over? Because of another requirement, that cars move no faster than four miles per hour. America, he says, took a very different approach to the car, merging private sector vision and innovation with public sector enlightenment to unlock the new technology and its economic, and ultimately, with World War I looming, national security benefits.
So they say, the incoming administration has the chance to, one, continue the country's global leadership and innovation while protecting national security, two, make sure we get it right on AI access and benefits from the start, and three, maximize the economic opportunity of AI for communities across the country.
So what are some of the specifics? Section one is called Competitiveness and Security. Basically, this says the federal government needs to clear the way by preempting state-by-state regulations in order to allow the AI industry's development of frontier models to, quote, best ensure that they promote US economic and national security. This is something that Altman started talking about during the SB 1047 debate, and it was part of the answer OpenAI gave as to why they didn't support that legislation, which was California-specific.
They write in this piece that they want the federal government to, quote, develop alternatives to the growing patchwork of state and international regulations that risk hindering American competitiveness, such as by having the federal government lead the development of national security evaluations at home and establish a U.S.-led international coalition that works toward shared safety standards abroad. They say that, quote, the federal government's approach to frontier model safety and security should streamline requirements, reduce bureaucratic obstacles to government-industry collaboration, and incentivize companies to support U.S. competitiveness. Some of the things they say the government could do include supporting the development of standards and safeguards, helping companies access secure infrastructure, and creating a defined, voluntary pathway for companies that develop LLMs to work with the government to define model evaluations, test models, and exchange information. I'm sure that word, voluntary, is going to be a point of consternation and quite a point of debate as we figure all of this out. And they also flagged that the government could, quote, help develop training programs to cultivate the next generation of AI talent in the US, especially in areas of the country that have not benefited from previous waves of innovation.
The next section is about rules of the road, and this is the core of OpenAI advocating for basically common-sense regulations. They home in on child safety issues. They discuss deepfakes, if somewhat tangentially, by talking about how to apply provenance data to all AI-generated audiovisual content. And they say, quote, people should be empowered to personalize their AI tools, including through controls on how their personal data is used.
Interestingly, this piece seems to have learned a lesson from the debate around SB 1047: they focus their rules-of-the-road section on concerns that regulators and lawmakers have right now, things like deepfakes and the abuse of minors, as opposed to the more speculative, existential-risk-type concerns that might lie in the future.
The last piece of the story is what they call infrastructure as destiny. And this is a drum that obviously Sam Altman has been beating very loudly for some time now. OpenAI writes, we believe that building enough infrastructure is not just vital for ensuring that AI around the world is based on US rather than China-based technology. It's an unmissable opportunity to catalyze a reindustrialization of the United States. Successful nations turn resources into competitive advantages.
In the AI era, chips, data, energy, and talent are the resources that will underpin continued U.S. leadership. And as with the mass production of the automobile, marshalling these resources will create widespread economic opportunity and reinforce our global competitiveness. Basically, they say there's a win-win, a two-for-one, available to us here: to win the AI race, we have to build out the infrastructure, and to build out the infrastructure, we necessarily have to create tens of thousands of skilled trade jobs.
They note that, quote, today demand for compute and energy far outstrips the available supply, while an estimated $175 billion in global funds is waiting to be invested in AI infrastructure. They have a warning: if the U.S. doesn't move fast to channel these resources into projects that support democratic AI ecosystems around the world, the funds will flow to projects backed and shaped by the CCP. Now, they share tons of ideas, which read basically like thought starters, things that individual politicians could pick up on and really run with.
For example, AI economic zones that, quote, significantly speed up the permitting process for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors. They also call for a nationwide AI education strategy, dramatically increased federal spending on power and data transmission, and streamlined approval for new lines. So this is clearly just an opening salvo.
What's interesting to me about it is the fact that it so clearly represents the sense that this is a moment of opportunity and an important inflection point. OpenAI is backing this up by hosting a gathering in Washington, D.C. on January 30th to, quote, preview the state of AI advancement and how it can drive economic growth.
There are so many different organizations and interests that are hoping for much out of Trump's first 100 days. I'll be very interested to see if and where AI hits on that agenda, if at all. There's certainly going to be plenty of discourse in that direction, and I will cover it here as it becomes important. For now, that's going to do it for today's AI Daily Brief. Until next time, peace.