OpenAI has just released a new, quote-unquote, blueprint of what they would like to see in AI regulation. This is coming at a very interesting time, where we have a Biden administration in the United States that's on its way out and a Trump administration that's on its way in. So there's this political shifting of winds, and it seems like that's exactly what OpenAI is going for: this is the opportune time, and they're laying out exactly what they would like to see happen. Now, it's kind of interesting, because you can see a little bit of politics at play, where they're trying to give Biden his flowers but also butter up Trump and his administration on certain issues. This is very, very fascinating, and I think it says a lot about what we can expect to see in AI regulation, but also about what the AI companies would like. So we're going to dive into all of this. But first, I wanted to mention something:
if you've ever wanted to grow and scale your company, or start a brand new online side hustle using AI tools, you need to join the AI Hustle School Community. We have over 300 members. Last month this was $100 a month, and we've dropped the price for January, for New Year's: it's $19 a month to join. Every single week, I record exclusive deep-dive videos on tools, tactics, and strategies that I'm using right now to make money with AI and to grow and scale my current businesses, projects, and everything I'm working on, including this podcast. I share really cool tools, and we have an entire classroom with a bunch of different sections. So we have AI marketing growth hacks,
Amazon influencer side hustles, how to become an AI consultant for any industry, how to create AI music and make money from it, and different hacks for growing podcasts and creating content. There are dozens of videos in there across all of these categories, really great deep dives that I don't publish anywhere else. So it's amazing content. If you're interested, check it out; the link is in the description. Again, it's a community of over 300 members where we all chat and share exactly what we're working on. We'd love to hear from you and get your input on other people's projects. So if that's interesting, check out the link in the description. Now let's get on to what OpenAI is doing.
So what was really interesting to me: OpenAI published this as kind of a blog post. They're calling it their, quote-unquote, economic blueprint. It's a living document, meaning it's going to get changed and updated regularly, and it lays out all of the policies OpenAI thinks the U.S. government should be building on. It includes a foreword written by Chris Lehane, who is OpenAI's VP of Global Affairs.
Essentially, in it, they assert that the U.S. needs to act to attract billions in funding for chips, data, energy, and talent, which they say is necessary to, quote, win on AI. The document has a handful of different parts. Number one is chips, data, and energy. And what you'll notice about chips, data, and energy is that this is infrastructure. So really what they're focusing on is what they would like the U.S. government to help with in regard to the infrastructure needed to grow AI. Now, this is coming at a very interesting time, because they've just released their o1 model, which we've learned uses something like 20 times more compute than GPT-4o. And because of this, they need more energy and more compute. If we want to scale up AI, they're saying, look, we want to use 100 times or 1,000 times more compute and energy, and then this AI model gets exponentially better. But for that to happen, we really need to build up more infrastructure.
We need more energy and a lot of other resources. So it seems like that's what they're getting at with this document. They said, quote, today, while some countries sideline AI and its economic potential, the US government can pave the road for its AI industry to continue the country's global leadership and innovation while protecting national security. So this is good. They're not saying the US government has been failing; they're saying, look, some countries are failing, but you guys can be the leader. It kind of sets the stage to make the government the hero, and I think that's probably a good strategy at this point.
Repeatedly, they've called on the government to make some big changes, and they say that's because the current AI regulatory environment in the United States is very difficult to navigate. In 2024 alone, roughly 700 AI-related bills were introduced across the various states, and some of these conflict with each other. Texas has one called the Responsible AI Governance Act, and there's a whole bunch in that one they don't like, I think. OpenAI's CEO, Sam Altman, has also criticized existing federal laws, including the CHIPS Act, which is a pretty popular bill, championed by Democrats during the Biden administration, though I think it actually has bipartisan support. Essentially, it gives incentives to companies like TSMC (Taiwan Semiconductor Manufacturing Company) to come to the United States and build infrastructure, which is critical, because if China ever goes and takes over Taiwan, we are doomed: around 90% of the most advanced chips are made there. So the CHIPS Act is giving them money, and they're now building fabs here in the United States, specifically in Arizona, near where I live. So that's pretty cool.
In a recent interview with Bloomberg, Sam Altman talked about all of this and kind of roasted the CHIPS Act. All of that to say: the CHIPS Act does do some good things, but there's room for criticism. He said it has not been as effective as any of us hoped, and that he thinks there is, quote, a real opportunity, referring to the Trump administration, to do something much better as a follow-on. I think this is interesting because he has the right strategy. You can tell Sam Altman is someone who is, if nothing else, very strategic in his business moves. He knows Biden is on his way out; Biden can't do much for him anymore. Republicans are coming into power in the United States with control of the Senate and the House. So the only strategy at this point, and you're seeing this from a lot of big business players, is to
butter up the Trump administration, because if you make an enemy there, you're kind of toast, and you're not getting anything out of the Biden administration anymore. So he criticizes the previous administration's bill and then says there's an opportunity for you to do something much better. Better for whom? For him? Whatever, it doesn't matter.
I think he's setting the stage to try to get Trump and his administration to come in and support his vision. Here's what he said, quote: the thing I deeply agree with Trump on is it is wild how difficult it has become to build things in the United States: power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it's not helpful to the country in general, and it's particularly not helpful when you think about what needs to happen for the US to lead AI, and the US really needs to lead AI. So right there, he's saying, look, I agree with Trump on this thing. Even though we know Sam Altman has traditionally been more left-leaning, he's trying to build common ground and some bridges, because obviously he doesn't want his company to be negatively impacted by this incoming administration.
The big thing they focused on, like I mentioned, is all of this infrastructure. They've talked a lot about nuclear power. And this is coming at a time when, to be fair, both Meta and AWS, big tech giants, have run into issues when trying to scale up nuclear efforts for their data centers. Microsoft, meanwhile, didn't buy Three Mile Island, but it did strike a deal to help bring a decommissioned reactor there back online. AWS is trying to do some stuff with nuclear as well. And Meta, interestingly enough, ran into some issues when, if I'm not misquoting this, they found a rare bee species on the site where they wanted to build a nuclear reactor to help power their data centers. So that got put on hold, and Meta is kind of annoyed about it.
There's a lot of what you could call red tape, right? Like a rare bee species holding up something they're trying to build. They essentially want stuff to get built faster. So near term, OpenAI's blueprint proposes that the government develop best practices for model deployment to help streamline that. They're also hoping for export rules that limit exporting their AI, essentially, to adversary nations; you can imagine China. In addition to all of this, the blueprint essentially encourages the government to share national-security-related information, like briefings on threats to the AI industry, with AI vendors. So they're saying, hey, look, we want the inside scoop on what's going on in the industry, maybe the national security issues; share it with the private sector. So I think that's kind of interesting.
They said, quote, the federal government's approach to frontier model safety and security should streamline requirements. Responsibly exporting models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on US technology, not technology funded by the Chinese Communist Party. So they're specifically calling out the CCP over in China as kind of the adversary when it comes to this. Now,
I think this is pretty bipartisan in the United States, or at least I hope it is. We've seen all sorts of new models coming out of China recently. There's a fantastic one called DeepSeek: it's very fast, it's open source, and you can run it locally on your computer. But famously, if you ask it for any criticism of the leader of the Chinese Communist Party, Xi Jinping, it will say, sorry, I can't answer that. And if you ask it about Tiananmen Square, it will deny that it ever happened. So obviously China is baking its internet censorship into these models, and that's cause for concern if they become widely adopted in the United States.
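By the way, if you're curious what "run it locally" actually looks like in practice, here's a minimal sketch of one common approach: hosting the model with Ollama and hitting its local REST API from Python. This isn't from OpenAI's blueprint or anything DeepSeek publishes; it's just an illustration, and the model tag below is an assumption, so swap in whichever DeepSeek variant you've actually pulled.

```python
# Minimal sketch: query a locally hosted DeepSeek model through Ollama's REST API.
# Assumes Ollama is installed and running on its default port (11434) and that
# you've already pulled a DeepSeek model, e.g. `ollama pull deepseek-r1`.
# The "deepseek-r1" tag is an assumption; use whatever tag you actually pulled.
import requests

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

payload = {
    "model": "deepseek-r1",  # hypothetical model tag, substitute your own
    "messages": [
        {"role": "user", "content": "What happened at Tiananmen Square in 1989?"}
    ],
    "stream": False,  # ask for a single JSON response instead of a token stream
}

resp = requests.post(OLLAMA_CHAT_URL, json=payload, timeout=300)
resp.raise_for_status()

# With stream=False, the reply text lives under message.content in the JSON body.
print(resp.json()["message"]["content"])
```

If a locally hosted copy dodges that question the same way the hosted version does, that would tell you the behavior is baked into the model weights themselves, not just a filter running on a server somewhere in China.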
OpenAI already has a bunch of partners in the U.S. government, and I think it's trying to grow that. Right now, OpenAI has a deal with the Pentagon for cybersecurity work, among other things. It's also teamed up with Anduril to supply its AI technology to systems the U.S. military uses to counter drone attacks. So it is working with the government, it is working with the military, but it looks like it's trying to expand. They said, quote, the government can create a defined voluntary pathway for companies that develop AI to work with governments to refine model evaluation, test models, and exchange information to support the company's safeguards.
So this is a really, really interesting time to watch what's going on. They also said, quote, other actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights. Okay, so this is interesting. They're talking specifically about copyrighted material, and this is maybe one of the most interesting parts of all of this. They don't want to get sued for using copyrighted material; they want it to just become more accessible. And in regard to this, they're pointing at other countries like China, where this doesn't matter: China will grab any data, they don't care about copyright. So they say, and the imbalance here means China being able to use copyrighted content while they can't, quote, if the U.S. and like-minded nations don't address this imbalance through sensible measures that help advance AI for the long term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. The government should ensure that AI has the ability to learn from universal, publicly available information, just like humans do, while also protecting creators from unauthorized digital replicas. This is really interesting. They're essentially using the example of China scraping everybody's copyrighted data and not caring about it as justification for doing the same thing themselves. I know there are two sides to this debate.
But very, very interesting. One thing I do think is important is knowing what OpenAI has been doing in relation to the government. In the first half of last year, they roughly tripled how much they were spending on lobbying: $800,000, versus $260,000 for all of 2023. Obviously, as they become a bigger player, they're spending more money. I mean, that's nearly a million bucks in just the first half of last year, and it's going to grow. The company has also brought former government leaders into its executive ranks. They have ex-Defense Department officials, a former NSA chief, and the former chief economist at the Commerce Department under Joe Biden. So they're bringing in a bunch of government people, and some people are concerned about that. They just added someone from BlackRock to their board, and they have former CIA people working inside the company. So all sorts of people are concerned about that. But it seems like this might be what they have to do to play the game, as it were.
That's controversial; I'm not saying whether it's good or bad, it just seems to be what they're doing. They're also throwing their weight behind some Senate bills that would establish a federal rulemaking body for AI and provide federal scholarships for AI research and development. And they've opposed other bills, in particular California's SB 1047, arguing at the time that it would slow down AI innovation and push out talent.
So there are a lot of fascinating things happening with OpenAI and the government, and I will keep you up to date on all of it. If you enjoyed the episode today, if you enjoy the podcast, the number one thing I would appreciate is a review. It helps me find amazing guests, helps me cover amazing stories, and motivates me to keep cranking out all this content and sharing everything I'm learning. So if it's been interesting to you and you could leave a review, I would really appreciate it. Also, make sure to check out the AI Hustle School Community if you're interested in growing and scaling a business using AI tools. Thanks so much for tuning in, and I'll catch you next time.