OpenAI has just released a new, quote unquote, blueprint of what they would like to see in AI regulation. This is coming at a very interesting time: the Biden administration in the United States is on its way out, and a Trump administration is on its way in. So there's this political shifting of winds, and that seems to be exactly what OpenAI is going for. This is the opportune time, and they're laying out exactly what they would like to see happen now. It's interesting because you can see a little bit of politics at play, where they're trying to give Biden his flowers but also butter up Trump and his administration on certain issues. This is very fascinating, and I think it says a lot about what we can expect to see in AI regulation, but also what the AI companies would like. So we're going to dive into all of this. But first, I wanted to mention: if you have ever wanted to grow and scale your company or start a brand new online side hustle using AI tools, you need to join the AI Hustle School community. We have over 300 members. Last month we had this at a hundred dollars a month; for January, for New Year's, we've dropped the price to $19 a month to join. Every single week I record exclusive deep dive videos on tools, tactics, and strategies that I'm using right now to make money with AI and to grow and scale my current businesses, projects, and everything else I'm working on, including this podcast. I share really cool tools. We have an entire classroom with a bunch of different sections: AI marketing growth hacks, Amazon influencer side hustles, how to become an AI consultant for any industry, how to create AI music and make money from it, and different hacks for growing podcasts and creating content.
There are dozens of videos in here across all of these categories, really great deep dives that I don't publish anywhere else. If you're interested, check it out; the link is in the description. Again, it's a community of over 300 members where we all chat and share exactly what we're working on. We'd love to hear from you and get your input on different people's projects. So check out the link in the description. Let's get on to what OpenAI is doing. What was really interesting to me: OpenAI published this as a blog post they're calling their, quote unquote, economic blueprint. It's a living document, meaning it's going to get changed and updated regularly, and it lays out all of the policies that OpenAI thinks the US government should be building on. It includes a foreword written by Chris Lehane, OpenAI's VP of Global Affairs, in which they assert that the US needs to act to attract billions in funding for chips, data, energy, and talent, which they say is necessary to, quote, win on AI. The document comes in a handful of parts. Number one is chips, data, and energy, and what you'll notice about chips, data, and energy is that this is infrastructure. So really they're focusing on what they would like the US government to help with in regards to the infrastructure needed to grow AI. Now, this is coming at a very interesting time, because they've just released their o1 model, which we've learned uses 20 times more compute than GPT-4o. And because of this, they need more energy and more compute.
And if we want to scale up AI, they're saying: look, we want to use a hundred or a thousand times more compute and energy so these AI models get exponentially better. But in order for that to happen, we need to build up more infrastructure, more energy, a lot more resources. That seems to be what they're getting at with this document. They said, quote, today, while some countries sideline AI and its economic potential, the US government can pave the road for its AI industry to continue the country's global leadership in innovation while protecting national security. So this is clever. They're not saying the US government has been failing; they're saying, look, some countries are failing, but you can be the leader. It sets the stage to make the government the hero, and I think that's probably a good strategy at this point. They have repeatedly called on the government to make some big changes, and they say that's because the current AI regulatory environment in the United States is very difficult: in 2024 alone, about 700 AI-related bills were introduced across the different states, and some of them conflict with each other. Texas has one called the Responsible AI Governance Act, and there's a whole bunch in it that they don't like. OpenAI's CEO, Sam Altman, has also criticized existing federal laws, including the CHIPS Act, a bill passed during the Biden administration with what I believe is bipartisan support. Essentially, it gives incentives to companies like Taiwan Semiconductor Manufacturing Company (TSMC) to come to the United States and build infrastructure, which is critical, because if China ever goes and takes over Taiwan, we are doomed.
Around 90% of the most advanced chips are created there. So the CHIPS Act is giving them money, and they're now building fabs here in the United States, specifically in Arizona, near where I live. That's pretty cool. In a recent interview with Bloomberg, though, Sam Altman was talking about all of this and kind of roasting the CHIPS Act. All of that to say: the CHIPS Act does do some good things, but there's room for criticism. He said that it has not been as effective as any of us hoped, and that he thinks there is, quote, a real opportunity, referring to the Trump administration, quote, to do something much better as a follow-on. I think he has the right strategy here. Sam Altman is, if nothing else, very strategic in his business moves. He knows Biden is on his way out and can't do anything for him. Republicans are coming into power in the United States with control of the Senate and the House. So the only strategy at this point, and you're seeing this from a lot of big business players, is to butter up the Trump administration, because if you make an enemy there, you're kind of toast; you're not getting anything out of the Biden administration anymore. So he criticizes the predecessor's bill, then suggests there's an opportunity to do something much better. Much better for whom? For him, perhaps, but it doesn't matter; I think he's setting the stage to get Trump and his administration to come in and support his vision. Here's what he said, quote: the thing I deeply agree with Trump on is it is wild how difficult it has become to build things in the United States. Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it's not helpful to the country in general.
It's particularly not helpful when you think about what needs to happen for the US to lead AI, and the US really needs to lead AI. So right there he's saying: look, I agree with Trump on this, even though Sam Altman has traditionally been more left-leaning. He's trying to build common ground and some bridges, because obviously he doesn't want his company to be negatively impacted by the incoming administration. The big thing they focused on, like I mentioned, is infrastructure. They've talked a lot about nuclear power, and this is coming at a time when, to be fair, big tech giants like Microsoft, Meta, and AWS have run into issues trying to scale up nuclear efforts for their data centers. Microsoft didn't buy Three Mile Island, but it is helping get a decommissioned nuclear reactor right next to it recommissioned and brought back online. AWS is trying to do some things with nuclear as well. And Meta, interestingly enough, ran into issues when, if I'm not misquoting this, a rare bee species was found on the site where it wanted to build a nuclear reactor to help power its data centers, so the project got put on hold, and Meta is kind of annoyed about it. So there's a lot of what you could call red tape, like a rare bee species blocking construction, and these companies are essentially trying to get stuff built faster. Near term, OpenAI's blueprint proposes that the government develop best practices for model deployment to help streamline that. They're also hoping for export rules that limit the flow of their AI to adversary nations.
You can imagine China. In addition to all of this, the blueprint encourages the government to share national security information, briefings on threats to the AI industry, with vendors. So they're saying: hey, we want the inside scoop on what's going on in the industry, including the national security issues; share it with the private sector. I think that's kind of interesting. They said, quote, the federal government's approach to frontier model safety and security should streamline requirements responsibly. Exporting models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on US technology, not technology funded by the Chinese Communist Party. So they're specifically calling out the CCP in China as the adversary here. I think this is pretty bipartisan in the United States, or at least I hope it is. We've seen all sorts of new models coming out of China recently. There's a fantastic model called DeepSeek: very fast, cheaply trained, open source, and you can run it locally on your computer. But famously, if you ask it for any criticism of the leader of the Chinese Communist Party, Xi Jinping, it will say, sorry, I can't answer that. And if you ask it about Tiananmen Square, it will deny that it ever happened. So obviously China is putting its Internet censorship into these models, and that's cause for concern if they become widely adopted in the United States. Now, OpenAI already has a bunch of partners in the US government, and it's trying to grow that. OpenAI has a deal with the Pentagon for cybersecurity work, among other things.
It has also teamed up with Anduril to supply its AI technology to systems the US military uses to counter drone attacks. So it is working with the government and the military, but it looks like it's trying to expand that. They said, quote, the government can create a defined, voluntary pathway for companies that develop AI to work with government to refine model evaluations, test models, and exchange information to support the companies' safeguards. So this is a really interesting time to watch what's going on. They also said, quote, other actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights. Okay, so this is interesting: they're talking specifically about copyrighted material, and this is maybe one of the most interesting things in all of this. They don't want to get sued for using copyrighted material; they want it to become more accessible. And in regards to this, they're pointing at other countries like China, where this doesn't matter: China will grab any data; they don't care about copyright. So they say, quote, if the US and like-minded nations don't address this imbalance, meaning China being able to use copyrighted material when they can't, through sensible measures that help advance AI for the long term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. The government should ensure that AI has the ability to learn from universal, publicly available information, just like humans do, while also protecting creators from unauthorized digital replicas. This is really interesting: they're essentially using the example of China taking everybody's copyrighted data without caring as justification for doing the same thing themselves. I know there are two sides to this debate, but it's very interesting.
One thing I do think is important to know is what OpenAI has been doing in relation to the government. In the first half of last year, they tripled how much money they were spending on lobbying: $800,000, versus $260,000 for all of 2023. Obviously, as they become a bigger player, they're spending more money, and this is going to grow. The company has also brought former government leaders into its executive ranks: ex Defense Department officials, a former NSA chief, and the former chief economist at the Commerce Department under Joe Biden. So they're bringing in a bunch of government officials, and some people are concerned about that. They just brought someone from BlackRock onto their board, and they have former CIA people working inside the company. So all sorts of people are concerned, but it seems like this might be what they feel they have to do to play the game, as it were. That's controversial; I'm not saying whether it's good or bad, but it seems to be what they're doing. In addition, they're throwing their weight behind Senate bills that would establish a federal rulemaking body for AI and provide federal scholarships for AI research and development. They've also opposed bills, in particular California's SB 1047, arguing at the time that it would slow down AI innovation and push out talent. Fascinating things are happening with OpenAI and the government, and I will keep you up to date on all of it. If you enjoyed the episode today, the number one thing I would appreciate is a review on the podcast. It helps me find amazing guests, cover amazing stories, and motivates me to keep cranking out all this content and sharing everything I'm learning. So if this has been interesting to you and you could leave a review, I would really appreciate it.
Also, make sure to check out the AI Hustle school community if you're interested in growing and scaling a business using AI tools. Thanks so much for tuning in and I will catch you next time.
Podcast Summary: Joe Rogan Experience for AI
Episode: OpenAI’s Economic Blueprint for Navigating AI Regulation
Release Date: April 21, 2025
In this episode of the Joe Rogan Experience for AI, the host delves into OpenAI’s newly released economic blueprint aimed at navigating the complex landscape of AI regulation. Released during a politically tumultuous period in the United States, where the Biden administration is concluding its term and the Trump administration is poised to take over, OpenAI strategically positions itself to influence forthcoming AI policies. This summary encapsulates the key discussions, insights, and conclusions drawn from the episode.
Release Context and Strategic Timing OpenAI has unveiled a "living document" titled the Economic Blueprint for Navigating AI Regulation. This blueprint outlines OpenAI’s vision for future AI policies and serves as a guide for the U.S. government to foster AI growth while maintaining national security. The timing coincides with a significant political shift, allowing OpenAI to leverage opportunities under the incoming Trump administration.
Notable Quote:
"OpenAI is essentially going for this opportune time and they're laying out exactly what they would like to see happen now."
— Host [00:00]
OpenAI emphasizes the critical need for robust infrastructure to support AI advancements. The blueprint highlights:
Notable Quote:
"The US needs to act to attract billions in funding for chips, data, energy, and talent, which is necessary to win on AI."
— Chris Lehane, VP of Global Affairs, [02:30]
OpenAI advocates for streamlined AI regulations and best practices to facilitate model deployment without stifling innovation. They propose:
Notable Quote:
"The federal government's approach to frontier model safety and security should streamline requirements responsibly."
— OpenAI, [15:45]
OpenAI raises concerns about international disparities in data usage, particularly criticizing China's disregard for copyright laws. They call for:
Notable Quote:
"The government should ensure that AI has the ability to learn from universal, publicly available information while also protecting creators from unauthorized digital replicas."
— OpenAI, [22:10]
Engaging with Incoming Administration With the Republican Party gaining control of the Senate and the House, OpenAI is strategically positioning itself to gain favor with the Trump administration. This involves:
Notable Quote:
"The thing I deeply agree with Trump on is how it is wild how difficult it has become to build things in the United States."
— Sam Altman, [12:50]
Increased Lobbying Efforts OpenAI has significantly ramped up its lobbying expenditures, tripling its spending in the first half of the previous year to influence AI-related legislation more effectively.
Notable Quote:
"In the first half of last year, they tripled how much money they were spending on lobbying—$800,000 versus $260,000 for all of 2023."
— Host, [25:30]
OpenAI is expanding its partnerships with various government entities to solidify its role in national AI strategy:
Notable Quote:
"They have a bunch of ex Defense Department officials, NSA chiefs, and formerly the chief economist at the Commerce Department."
— Host, [30:15]
Regulatory Conflicts and Legislative Overload The U.S. faces a fragmented regulatory environment with approximately 700 AI-related bills introduced in 2024 alone. This creates conflicting regulations across states, hindering cohesive AI policy development.
Opposition to Restrictive Legislation OpenAI has opposed certain bills, such as California's SB 1047, arguing that overly restrictive regulations could stifle innovation and drive talent away from the U.S. AI sector.
Notable Quote:
"They have opposed bills, particularly California's SB 1047, arguing it was going to slow down AI's innovation and push out talent."
— Host, [35:00]
China’s AI Advancements OpenAI acknowledges the rapid development of AI models in China, such as DeepSeek, which are capable of running locally but are heavily censored to conform to state propaganda. This underscores the need for the U.S. to maintain its AI leadership to counterbalance China’s technological influence.
Notable Quote:
"If you ask it for any criticism of the leader of the Chinese Communist Party, Xi Jinping, it will say, sorry, I can't answer that."
— Host, [40:20]
Intellectual Property Concerns OpenAI warns against the misuse of copyrighted material by adversarial nations, advocating for U.S.-led measures to protect intellectual property in AI training processes.
Notable Quote:
"If the US and like-minded nations don't address this imbalance, the same content will still be used for AI training elsewhere, but for the benefit of other economies."
— Host, [22:50]
OpenAI’s economic blueprint represents a strategic effort to shape the future of AI regulation in the United States amidst shifting political landscapes. By advocating for enhanced infrastructure, streamlined regulations, and robust intellectual property protections, OpenAI aims to secure a leadership position in the global AI race. Their increased lobbying efforts and strategic partnerships with government entities underscore their commitment to influencing policy and ensuring favorable conditions for AI development. As AI continues to evolve, OpenAI’s blueprint and its implementation will play a pivotal role in defining the technological and economic landscape of the future.
Notable Quote:
"It seems like this is kind of what they're getting at with this document—it says a lot about what we can expect to see in AI regulation, but also what the AI companies would like."
— Host, [05:10]
This episode offers a comprehensive analysis of OpenAI’s strategic maneuvers in the realm of AI regulation, highlighting the intricate balance between innovation, regulation, and geopolitical considerations shaping the future of artificial intelligence.