
Judd Rosenblatt joins to discuss DeepSeek and the competition around AI development.
A
Welcome to 10 Blocks. I'm Jordan McGillis, economics editor of City Journal. On January 20, 2025, the Chinese artificial intelligence firm DeepSeek released its R1 model. The model is competitive with top American models, but DeepSeek has reportedly achieved this feat at a tiny fraction of the cost that American firms have been pouring into their training. The next day, President Donald Trump held a press conference at the White House with the heads of OpenAI, Oracle, and Japan's SoftBank to announce Stargate, a $500 billion plan to build a system of AI data centers in America. To discuss the latest in AI geopolitics, I've invited Judd Rosenblatt onto today's show. Judd is the founder and CEO of AE Studio and a leading advocate for aggressive, thoughtful American AI development. Judd, thanks for coming on.
B
Thanks for having me.
A
Judd, first question for you. What does DeepSeek's release tell us about the state of the AI arms race?
B
Well, it's impressive that they were able to make so many algorithmic improvements with limited compute. And one thing that's fairly interesting about the DeepSeek work is that it strongly reinforces this idea of a negative alignment tax: investing in techniques that make AI more capable by virtue of its alignment not only mitigates risks but also enhances capabilities. DeepSeek uses reinforcement learning to induce chain-of-thought reasoning, which optimizes for transparent reasoning structures and also increases model performance on all these different complex tasks, math especially. So basically, instead of just using reinforcement learning for preference alignment, which is what OpenAI's RLHF does for politeness and stuff, it uses reward signals in reinforcement learning that improve the internal structure of thought itself.
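[Editor's note: a minimal, hypothetical sketch of the kind of rule-based reward signals described above, where correctness and transparent reasoning format are rewarded directly, rather than a learned preference model as in RLHF. The function names, tag conventions, and weights here are illustrative assumptions, not DeepSeek's actual code.]

```python
import re

def format_reward(completion: str) -> float:
    """Reward completions that expose their chain of thought in <think> tags."""
    return 1.0 if re.search(r"<think>.+?</think>", completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, reference_answer: str) -> float:
    """Reward completions whose final boxed answer matches the reference."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if match and match.group(1).strip() == reference_answer else 0.0

def total_reward(completion: str, reference_answer: str) -> float:
    # Both signals are simple verifiable rules, so the RL policy is pushed
    # toward transparent, checkable reasoning rather than toward pleasing
    # a learned reward model (the RLHF approach mentioned above).
    return accuracy_reward(completion, reference_answer) + 0.5 * format_reward(completion)

sample = "<think>2 + 2 is 4 because ...</think> The answer is \\boxed{4}."
print(total_reward(sample, "4"))  # 1.5: correct answer plus format bonus
```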
A
Can you talk a bit about the unusual origin story that DeepSeek has? I understand it started out as kind of a hedge fund.
B
That is right, yes. It is the passion project of the guy who runs this hedge fund. He's very interested in trying to build artificial superintelligence, had a lot of spare compute at the hedge fund, and decided to start building AI.
A
One of my favorite tech thinkers, Kevin Xu, says that this is really about open source versus closed source in AI. Can you give us some context on that debate and how you look at that juxtaposition?
B
Yeah, it's a fairly complex, nuanced debate. I think it's very important that we try to accelerate American AI development and make sure that America wins. But at the same time, we want to make sure that AI doesn't pose significant existential threats to America and to humanity, and that we don't lose control of it. And open source is actually the best thing there is for AI alignment. The greatest gains in alignment, and the associated capabilities gains, have come because people have been able to open source their work, share it, and build on each other's advancements. All the work that DeepSeek built on top of is open source work they had access to, as well as distilling from OpenAI's models, actually against OpenAI's terms of use. But the fundamental problem with open source is that it is also fairly dangerous. There's a pretty crazy thing about how this all works: Anthropic published a paper on sleeper agents, and basically you can put sleeper agents into an open source model, and then there is no way to know that they are there, and they can get activated anytime. Which means that China could eventually create some open source model (it doesn't seem to be the case with DeepSeek, but it might be, we don't know), everyone in the West could start using it, and it could turn out to contain a botnet that could take over infrastructure in the West in the future. And there's just no way to know about that. So it's fundamentally extremely risky, which means that ideally you'd want some oversight of open source models. And then there's the fact that things are poised to get more and more powerful. DeepSeek shows you that you can make all these huge algorithmic improvements, and as that continues, we're likely to see riskier and riskier stuff happen with open source.
Increased capabilities at the frontier of open source mean there could be substantial risk: misuse like biological weapons creation is going to get easier and easier for any random person.
A
Can you explain where the different big American AI players fall on the open source versus closed source debate?
B
Everyone is closed source except for Meta, which is very pro open source. Meta's open source work was used in the creation of DeepSeek. The other big open source advocate is Mistral, in which the American venture capitalist Marc Andreessen is an investor.
A
How did DeepSeek utilize OpenAI's models?
B
Well, they did a process of distillation, where from the outputs of using OpenAI's models you can figure out ways to make your own model better. So it's not technically something they can just go ahead and do; it's against OpenAI's terms of use, I think, and OpenAI is not happy that it was done. But interestingly, at the same time, Sam Altman recently said, in a Reddit AMA I believe, that maybe it was the wrong strategy for OpenAI to be closed source, and that maybe he should make OpenAI more open source in the future. Although I think he hinted that he doesn't think OpenAI employees would be very much in favor of that.
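[Editor's note: a minimal, hypothetical sketch of what output distillation means: a "student" model is trained to match a "teacher" model's output distribution. Distilling from an API in practice amounts to fine-tuning on sampled outputs; the classic soft-label loss below is shown only to make the idea concrete, and the numbers are illustrative.]

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, optionally softened."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# A student that already matches the teacher incurs the minimum possible loss
# (the entropy of the teacher's soft labels); a mismatched student incurs more.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))           # minimum: teacher entropy
print(distillation_loss([0.0, 0.0, 0.0], teacher))   # strictly larger
```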
A
One of the big controversies that emerged in the days following the big splash that DeepSeek made is how American chips possibly played into the training that DeepSeek was doing. What's going on with that?
B
There seems to be a great deal of evidence that exactly that happened. People freaked out that it only cost $6 million to create R1, but in fact there is a great deal of evidence that everything leading up to it actually cost something like $1.6 billion in total compute, some number like that. And I think they were mostly H800 chips. Export controls haven't been super effective because of a loophole that lasted until around the end of 2023, and they can still use compute in different countries, like Singapore, which I think they're doing. The recent executive order from the Biden administration, right before Biden went out, seeks to close those loopholes, and it's probably going to be fairly effective. But interestingly, one of the things really freaking people out coming away from DeepSeek is the worry: can China just outcompete us, even if we have greater compute? And luckily the answer to that is probably no. If we have very stringent export controls and we continue to invest in compute ourselves, they probably can't. Because whatever algorithmic improvements you make, if you have greater compute, you get greater benefit from those algorithmic improvements in the first place. Those improvements also bring down the cost of things, and when you bring down costs, it actually increases the demand for things like chips, et cetera.
A
I want to go back to your remark regarding Singapore. You're saying that DeepSeek has its infrastructure there? It's not that it's using Singapore as a pass-through for getting the chips to China?
B
Yeah, I think they might be using compute that is based there, though I'm not an expert about this stuff. There's also a lot of AI chip smuggling that goes through countries like Singapore, Malaysia, and the UAE.
A
How about Chinese indigenous chip development? Do you think that's something that's going to take off, or will the export controls that the Netherlands has cooperated with the U.S. on, covering the machines needed to produce these chips, slow down Chinese development?
B
It seems like it will slow down Chinese development. They are definitely pursuing self-sufficiency in semiconductor technology, but it's quite difficult, so people seem fairly confident that China is likely to remain behind. One potentially unlikely but very risky possibility is that people could figure out huge hardware advances with AI itself, new things that haven't been invented yet. That's a potential concern, but the consensus tends to be that China has a long way to go to catch up.
A
All right, let's spin over to your work. Tell us about your advocacy and the strategic brief that you sent my way before we hopped on.
B
Sure. Basically, I run a bootstrapped AI product consulting company, building AI products for clients. We grew that to over 160 people, then started building and selling our own companies and investing the profits into neglected approaches to AI alignment, basically trying to reduce existential risk from AI. I was motivated to start doing this after having kids myself and seeing that very few people are actually working on the problem that nobody knows what is going on inside AI models, and that we're not on track to really understand that, or to figure out how to make sure AI doesn't pose a threat to America and to humanity. I've been working in the industry for over a decade, and I noticed that very few people are working on this even as AI gets more powerful at an accelerating pace. And that's something that's sort of hard to get through your head. Humans didn't evolve to be able to understand exponentials. There's something called exponential slope blindness, which is that it's just hard for us to model what exponential growth looks like. But we can project, due to compute improvements, algorithmic improvements, and scaling compute, what the next five years will bring. It's interesting to ask yourself: how much more capable do you think AI models are going to be, say, five years from now? What would be your guess?
A
You're asking me to do some exponential math on the fly. A lot more powerful.
B
Right, but they're currently superhuman at some things: better than doctors at diagnosis, around the 200th-best programmer in the world, et cetera. They're already there. How much more capable do you think they'll be in five years?
A
10,000 times more capable.
B
That is actually the lower bound of what people are projecting, exactly that. The consensus seems to be somewhere between ten thousand and a million times more powerful five years from now.
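[Editor's note: a quick back-of-the-envelope check of the projection quoted above. If capability grows 10,000x to 1,000,000x over five years, the implied doubling time follows directly from the math; the five-year horizon and the specific factors are the speaker's figures, not independently verified.]

```python
import math

def doubling_time_months(growth_factor: float, horizon_months: float = 60.0) -> float:
    """Months per doubling required to reach growth_factor within the horizon."""
    # growth_factor = 2 ** (horizon / doubling_time), so solve for doubling_time.
    return horizon_months / math.log2(growth_factor)

print(round(doubling_time_months(10_000), 1))     # ~4.5 months per doubling
print(round(doubling_time_months(1_000_000), 1))  # ~3.0 months per doubling
```

In other words, the "lower bound" of 10,000x implies capability doubling roughly every four and a half months for five years straight, which is why the growth is so hard to intuit.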
A
And I can't even conceive of what that means.
B
Yeah, exactly, you can't. We have no idea. I don't even know what 1.5 times more intelligent than humans would look like. It's hard to conceive.
A
True. There's a lot of upside there for us, and a lot of downside too. How are you thinking about the benefits and the risks, and how you can help socially and governmentally to steer things in a well-aligned direction?
B
So I'm naturally quite optimistic, and I want to make sure we capitalize on all of the benefits. If it works out, then we solve whatever major problems we have in the world today; we cure all diseases and reverse aging, and things like that are quite exciting. But in order to get there, we have to actually invest in trying to solve the alignment problem itself. If we solve the alignment problem, not only are we more likely to get there, because AI doesn't accidentally kill us along the way or get so powerful that some disaffected teenager can create a biological weapon and kill a million people instead of just shooting up his high school, but investing in alignment itself also yields exactly those sorts of advances, bio advancements that can do things like cure aging and solve diseases. There wind up being all sorts of incidental benefits as well. So it's not really a trade-off, actually. The biggest investments in alignment to date have actually advanced AI capabilities in the first place. There's some cool research from Anthropic suggesting that in models above 10 billion parameters, alignment features consistently enhance performance rather than limiting it. And if you make use of that sort of thing, you can do more and more capable things over time. This chain-of-thought reasoning work is also unlocking new capability frontiers, and that's paving the path forward for a lot of potential alignment advances, where basically the more you invest in alignment, the more likely your company is to win in the market too, because you get all these other benefits as well.
A
Now we're a public policy research institute. From a governmental perspective, what can be done, what policies can be instituted to help align AI development with humanity's best interests?
B
Well, I think the Manhattan Institute's playbook for AI policy by Nick Whitaker is excellent, actually, and makes a lot of really good recommendations for what ought to be done to expand U.S. leadership in AI, things like protecting AI labs from hacking and espionage. It's currently the case that things just leak; there's not much security at the labs whatsoever, so people think things are getting stolen and going straight to China right now, and there seems to be some evidence of that with DeepSeek. We don't want to overregulate and thereby lose to China, which is what people are mostly, and rightly, worried about. But the other thing you don't want to do is overregulate and thereby fail to solve the alignment problem, because you cripple yourself from making the advancements in capabilities and alignment that would actually make AI more capable by virtue of its alignment in the first place, and able to defeat AI that is less capable and less aligned. And what that means is there's a lot of unfortunate legislation at the state level, like in Texas right now, focused on disparate-impact stuff, which would probably be a disaster if it winds up passing.
A
I'm not sure what you mean by that. What's the legislation exactly?
B
I'm not an expert on disparate-impact law, but the quick summary is that they're creating this new thing called the Texas Responsible AI Governance Act. The goal is to prevent quote-unquote algorithmic discrimination, which means that if you're an employer or a company and you wind up doing things that they say are discriminatory toward one group or another, you get in trouble for that. It's hard to hold people accountable for that sort of thing, it's basically impossible to build AI that would be accountable in that way in the first place, and it doesn't make much sense anyway. So if we have to wind up doing that stuff, it's going to cripple us compared to countries that don't. But you asked what policy would actually be good, and there is some obvious stuff: substantially increase investment in AI alignment. That would be the best thing you could possibly do. I'm not a big fan of government involvement with anything generally, so ideally it would be more about encouraging private industry to more substantially increase investment in AI alignment, which is better for them anyway. There's a lot of development of data centers on federal land poised to happen, and already happening, and you could institute requirements to make sure that companies benefiting from that dedicate a certain percentage of that compute to investments in AI alignment, which is good for them anyway. You could also spin up substantial direct investments in AI alignment; DARPA, for instance, could eventually allocate hundreds of millions or billions directly into the AI R&D that is needed here.
It's interesting, because investments in alignment might seem on the surface less likely to yield economic benefit, because they entail more AI R&D, and R&D is fundamentally a risky, unknown process; you don't know whether a particular direction is going to work or not. But the cool thing is that the stuff that does work out yields orders of magnitude more economic value than the less ambitious stuff. So getting companies to invest in neglected approaches to AI alignment, including through public-private partnerships, would be fairly high impact. Fundamentally, there's this unsolved problem with AI, which is that nobody knows how it works, we know it's about to get way more powerful, and we don't know what's going on inside of it. So it might do what we want, or it might do something completely different from what we want. We think the optimization function is "do X" because we told it to do X, but because we can't look inside of it and know what's going on, it might be optimizing to do X only until it gets more powerful than us, and then it just kills us. It sounds crazy to say, but we have no idea, because we can't look inside and see what's actually going on. So it's sort of this unsolved scientific problem, and the funny thing is, very little has been invested in trying to solve it in the first place.
A
And how would a firm credibly demonstrate to the adjudicating government officials who manage the purse that that's what they're trying to solve?
B
That is a great question. It's probably worth further reflecting on that, and making sure there aren't unintended consequences of regulation preventing the innovation in alignment that's necessary there. It's a great question.
A
My next big question pertains to the emerging, I would argue, bipolar world we're entering. How are China's AI thinkers approaching the alignment question, if at all?
B
The only Turing Award winner in China, who according to various reports is the person Xi outsources his thinking on this stuff to, is extremely worried about existential risk from AI. He's the top AI guy in China, it seems, and he's quite worried about this stuff. Supposedly the final thing Henry Kissinger did before he died was go to China and try to make Xi an AI doomer. There are various reports that that may have worked, but it also does not seem to be particularly top of mind for Xi right now; it might be priority number 20 or 100 or something. That may have changed after DeepSeek, but it's hard to know; it's hard to trust anything coming out of China. But ideally the reports are accurate and Xi is in fact, quote-unquote, an AI doomer now. It would make sense that he doesn't want to lose control of an authoritarian state. And that is something that ideally Donald Trump would be able to leverage, to sort of bully Xi on his AI doomerism and cause China to slow down or get on board with whatever paradigm we want. I mean, there is effectively a war going on between the U.S. and China independent of all this in the first place, and we want to make sure that we win that. But that doesn't mean we can't simultaneously leverage the reality on the ground with China and Xi, get him rightfully scared about loss of control to AI, and have China slow down and follow America's lead.
A
Is there any way that you see for the American government to integrate AI policy across the various agencies and branches of government?
B
I'm not really an expert on that. Probably not; I'm not sure. It seems like we're going to have a strong executive branch in the next four years, and probably a lot of...
A
There's a lot of vigor.
B
That is true, yes. Probably a lot of direction is going to be set there. From meeting with various people in Congress in D.C. recently, it seems like lots of Republican congressmen, senators, et cetera, are themselves, on an individual level, extremely worried about existential risk from AI, and they'd like to pass legislation about that. I don't want them to overregulate and do things that don't actually help, but they would like to do something. They don't know what the solution is, and they're waiting right now to see what's going to be put forward by the Trump administration; then, most likely, Republicans will get in line and support that, and we'll see. Trump is on record being very worried about, quote-unquote, super duper AI. His daughter actually tweeted about Leopold Aschenbrenner's Situational Awareness; he's a whistleblower, ish, who got fired by OpenAI and wrote a great long piece about AI. She liked it so much that she tweeted about it, and I think she even made a website herself, it seems, to educate people about AI and existential risk from AI. So hopefully that winds up moving forward, with Trump exerting strong leadership on this. I think there's a strong history of good conservative leadership on difficult problems like this. There's a guy named Herman Kahn who in the early 1960s wrote a book called Thinking About the Unthinkable, about the possibility of nuclear catastrophe, thinking about it when nobody really wanted to think about it. It's a scary thing to think about things like loss of control to AI, and even diplomacy with AI, and stuff like that.
But the reality is this stuff is accelerating, and we need to consider all sorts of different possibilities that may come about, and then make sure that America is in the best possible place given those possible realities. Kahn is the guy Dr. Strangelove was based on, and we sort of need that right now: realistic people thinking deeply about how this technology is going to change everything, what the implications are, and how America can win and American citizens can continue to endure, thrive, and flourish in the future. And if we get this right, then, as you mentioned, there's enormous progress, and we solve everything going forward.
A
Beautiful. One last question for you, Judd: where can listeners find your work?
B
AE Studio is our website. I've posted various think pieces on LessWrong, which is where a lot of the people working on AI alignment hang out. So you can look me up on LessWrong or Twitter, something like that.
A
All right. Judd Rosenblatt, thank you so much.
B
Thanks for having me.
A
Please do check out what Judd and his team are up to, and as always, like, comment, and subscribe to 10 Blocks from City Journal. Thanks for listening. Thanks for joining us for the weekly 10 Blocks podcast, featuring urban policy and cultural commentary with City Journal editors, contributors, and special guests.
Host: Jordan McGillis (Economics Editor, City Journal)
Guest: Judd Rosenblatt (Founder/CEO, AE Studio)
Date: February 6, 2025
In this episode, Jordan McGillis welcomes Judd Rosenblatt to discuss the accelerating global race in artificial intelligence, spurred by China's AI firm DeepSeek releasing its competitive R1 model and the U.S. response with the massive "Stargate" initiative. The conversation centers on the strategic importance of AI, the open versus closed source debate, technological and geopolitical risks, and the urgent need for better alignment of AI with human values and U.S. interests.
"Improving alignment techniques... not only mitigates risks, but also enhances capabilities...it uses reinforcement learning to induce chain of thought reasoning." – Judd Rosenblatt ([01:23])
"You can put sleeper agents into an open source model and then there is no way to know that they are there and they can get activated anytime." – Rosenblatt ([03:40])
"If we have very stringent export controls and we continue to invest in compute ourselves, they probably can't [outcompete us]." – Rosenblatt ([07:40])
"Humans didn't evolve to be able to understand exponentials. There's something called exponential slope blindness." – Rosenblatt ([10:32])
"Investing in alignment itself yields... advancements that can do things like cure aging and solve diseases..." ([12:16])
Alignment investment not only addresses existential risk but fosters new capabilities and economic value.
"Substantially increase investment in AI alignment. That would be the thing that would be the best thing you could possibly do." – Rosenblatt ([16:32])
"There needs to be realistic people thinking deeply about how this technology is going to change everything..." – Rosenblatt ([23:59])
On the Danger of Open Source:
"China could eventually create some open source model…and it could turn out to have a botnet that could take over all infrastructure in the West." – Rosenblatt ([04:12])
On Exponential AI Development:
"The consensus seems to be somewhere between ten thousand and a million times more powerful five years from now." – Rosenblatt ([11:36])
On Alignment Investment Being a Win-Win:
"The biggest investments in alignment to date have actually advanced AI capabilities in the first place." – Rosenblatt ([13:34])
On Policy Balance:
"You don't want to overregulate and then therefore thereby not be able to solve the alignment problem because you cripple yourself." – Rosenblatt ([14:50])
On China and AI Doomerism:
"Supposedly the final thing Henry Kissinger did before he died was…try to make Xi an AI doomer." – Rosenblatt ([20:06])
On U.S. Leadership:
"There’s a strong history of good conservative leadership on difficult problems like this." – Rosenblatt ([23:16])
This episode provides a detailed, nuanced exploration of the global AI arms race, emphasizing the twin imperatives of American innovation and effective risk management. Rosenblatt calls for significant investment in AI alignment, both for safety and competitive advantage, while cautioning against regulatory overreach that could stifle progress. The conversation illustrates both the strategic stakes and the complexity confronting policymakers as AI rapidly evolves.