
A
Hello, and welcome to a free preview of Sharp Tech. Hello, and welcome back to another episode of Sharp Tech. I'm Andrew Sharp, and on the other line, Ben Thompson. Ben, happy New Year. How you doing?
B
I mean, I'm struggling a bit, to be totally honest. Since, you know, the first podcast back, we're starting, what, like, 20 minutes late? Yeah, it's just been a real struggle. But, you know, whenever I think that things are difficult, I'm having a hard time working through it. All I have to think about is you.
A
Yeah, it could be worse.
B
Your basketball team just acquired Trae Young. Congratulations. Happy Trae Young Day.
A
Oh, man. I'm not going to lie, I'm down pretty bad over all this. The Wizards just traded for one of my least favorite players in the entire NBA. You were mocking me with AI Photoshops last night, Photoshopping Trae Young and his thinning hair onto my head.
B
I know. The problem was, the first time I did it, I said, put the hair from this picture onto him, and the hair was too full. I'm like, that is a full head of hair. That's not the Trae Young that I know.
A
Interestingly enough, yes. That's what I have to look forward to over the next several years, because God knows the Wizards are going to give him an extension and he's just going to be here in our nation's capital until the end of the decade. So, yeah, it's been a tough 48 hours. I was not prepared for the possibility that the Wizards were going to trade for Trae Young, but here we are. I'm thrilled to be recording a tech podcast where I can escape all of that. I have to record Greatest of All Talk later tonight, but we could go for hours on Sharp Tech, just for my own personal sanity.
B
I'm not sure I'm prepared to do that, but we'll see what we can do.
A
Yes, well, we'll begin with a post from the holiday break that was headlined Capital in the 22nd Century. Philip Trammell and Dwarkesh Patel write, in part: If AI is used to lock in a more stable world, or at least one in which ancestors can more fully control the wealth they leave to their descendants, let alone one in which they never die, the clock-resetting shocks could disappear. Assuming the rich do not become unprecedentedly philanthropic, a global and highly progressive tax on capital, or at least capital income, will then indeed be essentially the only way to prevent inequality from growing extreme. Without one, once AI renders capital a true substitute for labor, approximately everything will eventually belong to those who are wealthiest when that transition occurs, or their heirs, or more precisely, it will belong to the subset of this group who save most, and invest with a view to maximizing long-run returns. So, Ben, we're going to enter the dorm room here to kick things off for 2026: imagine the economy in the 22nd century. You wrote about all this in your article on Monday, and I enjoyed that piece. But before we get to the substance, I'm curious what prompted you to write that now, as we come back from the holidays?
B
Well, first off, this piece was sort of ricocheting around at least my corner of Twitter over the holiday break. Kind of a note to self: drop these speculative pieces between Christmas and New Year, when no one
A
else has anything better to do
B
than to talk about this. The piece and the perspective kind of annoyed me. I was getting worked up in some group chats, so I'm like, okay, fine, I'll write about this on Monday when I come back. So that was sort of the chief motivator. There's a bit where, yes, this is kind of a dorm room speculation piece. Like, what happens when AI does literally everything? And it's not AI that we're necessarily controlling. It's like, oh, well, the physical world has marginal costs. Well, what if the AI decides to go out and get the natural resources and no one has to be involved? At what point is there not even really marginal cost in any sort of meaningful way? Because the AI is building the AI that is building the robots that are building the other robots, and all that sort of thing, and can do literally everything that humans can do, but better.
A
So build the plane, fly the plane, everything at that point, build the excavator,
B
go survey the planet or the galaxy to find the minerals, whatever it might be. No, exactly. There's a bit of, what are we even talking about here? So whatever, just let it fly, it's not a big deal. But at the same time, implicitly, I'd say there's three things. Number one, sort of in this paragraph you read, there's this push that actually we need to address this now, because the transition is happening quickly, and so we need to lock in the sort of form of the world that we want when this arrives.
A
Right?
B
So I think there is at least an agitation to make significant changes to the way we think about a lot of this stuff.
A
And I will say that impulse does make some sense to me. If you forecast seismic change that society is just not equipped to handle, I understand why, in broad strokes, you would say let's put down some guardrails to prevent things from spiraling out of control 10 or 15 years from now.
B
Right. I mean, that's a whole other discussion that we may have, about how productive it actually is to try to address things ahead of time. There is an awful lot of sort of arrogance in the assumptions about one's knowledge of the future, one's ability to even deal with the unknowns, if you will.
A
Right.
B
And what do you regulate?
A
What guardrails do you put down when the technology that you're trying to control is effectively in its infancy? That would be the argument against doing something like that.
B
And sort of, too, there is actual stuff happening now in this vein. Like, you have this potential wealth tax on billionaires in California, which, you know, Patel on Twitter is trying to separate his article from. But this is something I learned a long time ago: you don't get to draw these subtleties. When it comes to politics, everything is a blunt instrument, and if you're writing an article endorsing a long-term tax on capital, you are, sorry, in the same boat as the people wanting to impose that tax now, even if you disagree with that specifically. Right? And again, this is a lesson, I think, that comes with age and experience, but it is just sort of the reality of the matter. More broadly, the sentiment: this is a very San Francisco type of piece, and there is this mindset, I think, amongst AI people. And it goes back to lots of questions we've talked about, like, why did OpenAI start as a nonprofit? And quite like the doomer sort of cults and the effective altruists and all these pieces, there's almost a religious aspect to it, about what AI is and what it's going to be. It's this weird paradox where you have all these people working very hard to build this, and there's a bit where that almost religious aspect is super useful, because it's a motivator to build this. But it's married to this viewpoint that I think is unduly pessimistic and will lead to bad decisions and bad outcomes if it's allowed to proliferate. And so I just feel like someone needs to come out and write a different point of view, one that at least raises questions: do we actually know and understand the nature of humanity and human nature? And how does that matter in the context of AI? So, is this an optimistic piece? It is. It is an optimistic piece, in certain senses.
I think a lot of my AI perspective is a combination of I'm choosing to be optimistic and accepting that the alternative, I think, is not this. I think the alternative is actually much worse.
A
In what respects? What's the alternative that's much worse?
B
Well, to take this piece specifically, there's this idea that, again, AI becomes so good that it's basically a zero marginal cost capability for not just digital goods, which we've talked a ton about, but also physical goods. So, like, where do you get the AI to give you a massage, or be a physio, for example? Right.
A
Yeah.
B
To take something that seems like a very human sort of endeavor. Well, because, again, the AI figures out how to make better AI, the better AI figures out how to make better robots, and it uses those robots to make even better robots.
A
Right. So just to unpack this, because I'm new to the dorm room and I don't spend very much time in San Francisco talking about what the world's going to look like in the 22nd century.
B
But that's the takeoff. It says it in this paragraph: when the takeoff or the transition happens, that's the transition they're talking about, where humans are out of the loop.
A
It becomes zero marginal cost because you have a robot that can go get the resources you need and put it all together. If you're talking about a product, for instance, it goes and gets the materials, it assembles the materials.
B
It does the exploration, it does the mining, it does the transportation. Yeah. Humans are just totally out of the loop. At the end of the day, this is kind of like our Netflix discussion, right? What is the one scarce resource?
A
It's time and attention, right?
B
And this is the removal of that. Time and attention is a human constraint, a capability that's limited by getting someone to actually do something. This is like, what if that's totally removed? That's what this is referring to. And my core pushback, even beyond all the human stuff, is: throw me in with the doomers, right? Like, if the AI is actually this good, to the extent you don't need a human in the loop for literally anything, why do we have a human? How is it that we're keeping control of this AI? Right.
A
That is a fair question. If we get to that point 150 years from now, we're probably screwed in that scenario regardless.
B
Right. And in this scenario, somehow we still have the property laws and the governance and the taxation rules from 2025. Somehow those all stay the same, and so we gotta pass the right law, like, now, or start thinking about it for this fantastical future. There's just a weird set of assumptions that are carried forward while everything else is so drastically changing we can barely imagine what that world might be. And if that world arrives, I think everything is going to change. So like let's, let's relax a little bit here.
A
Pass new laws? Because the governments as we know them will not exist in that world.
B
No, the AIs will pass their own laws, because that's what they'll do. I just hope they're... that's my thing.
A
Like, you know.
B
Yeah, like, I could buy the doomer point of view. I just don't. This is a weird middle ground. It's like, this isn't going to happen. There are just some weird, weird thoughts here. Weird assumptions.
A
Yes, fair enough. And what I enjoyed about your piece on Monday is that it was an optimistic way to start the year. The headline was AI and the Human Condition, and you wrote about how the vision put forth by Dwarkesh doesn't necessarily account for the good aspects of the human condition.
B
This is the part that. This is where I was getting worked up on the group chats.
A
Yeah.
B
So in this world where the AIs do everything, right, everyone's going to have everything that they want, because again, it's sort of zero marginal cost. If you're a rich person in this weird scenario where we still have normal property laws and all these sorts of pieces, and people own the AI and control the AI, why would you not have all the AIs you control give everyone everything?
A
Right, it doesn't cost you anything.
B
Right? No, exactly right. So in this world, life sounds pretty grand. Everything's taken care of. So why, why would we care about inequality? Because everyone has everything. Just because on paper everything's owned by Elon Musk, or whatever it might be, or his 47 gazillion heirs. Which, by the way, apparently irrevocable trusts are going to guarantee that the heirs don't waste all the money. Again, a weird holdover, that this particular legal arrangement is gonna last until that time.
A
Heirs are pretty
good at restoring equality, historically speaking.
B
Right. But no, the AI is going to run the trust, so it's going to be fine. Anyhow, why do we worry about inequality in this world? There's no materialistic reason to be worried about inequality, because everyone has everything. Right?
A
But we worry because even in that world, inequality would still be a driver of angst and frustration among the people.
B
Put a pin in that. I want to come back to that. So in this world, the reason to worry about inequality is just because humans don't like it. They don't like knowing that someone else has way more than they do. And I think this is one of the challenges of technology today. The whole Instagram feed, and this even goes back to TV: you get visions of a world that's not yours. And you're not just comparing yourself to, what's the phrase? The Joneses down the street, or whatever it is. Right? Like keeping up with the... keeping up with the Joneses. Is it Joneses?
A
Keeping up with the Joneses.
B
right. Like now it's Keeping up with the Kardashians. Right? Or sort of whatever.
A
I will say the point on Instagram resonated, because as a Formula 1 fan, I follow a lot of the drivers and a lot of the WAGs, and I have access through Instagram to a life full of private jets and unbelievably luxurious vacation spots throughout Europe. And I consume those posts, and I do feel poorer than I actually am when I'm sitting there understanding what an F1 life looks like. And so, yeah, we have more access to the ultra wealthy these days. And it makes everybody pretty frustrated, right?
B
Along with a huge number of people pretending to be ultra wealthy.
A
That's true, too. It's important to keep that in mind as you're consuming Instagram content.
B
Right. So implicit in this idea that in a world of total abundance we still have to worry about inequality, one of the assumptions being carried forward is a negative view of human nature: that human nature is envious, that it's not satisfied with what it has, even if on an objective level what it has is pretty remarkable. And by the way, I think this is a valid point. I think this is a truth, right? People today, life's amazing, right? Even just the consumer surplus generated by technology: access to everything, anywhere, all the world's information. We can watch any game. I could watch all your Washington Wizards games with Trae Young if I wanted to. Which I will not want to. I have the same phone as the richest person in the world, because that's the nature of technology, it brings everything up. Right. And yet people feel unhappier than ever about their relative economic position. They elected communists in New York City because it's just not fair that people get to have more than I do, or whatever it might be. And so you have this. That human condition is a real thing.
A
It's driving politics all over the world. It's not just New York City right now. Yeah.
B
The question though is why do you only carry forward the worst part of humanity?
A
The best parts? There are good parts, historically.
B
What about the best parts? What about the good parts?
A
Figure it out. Yeah. So tell me more about the good parts of humanity.
B
Well, the good parts of humanity are, number one: humans, I think, like other humans. Right? I mean, we have a friend who is very passionate about this in terms of music. AI music: very impressive, zero desire to listen. The analogy I make is, I like a good Japanese whiskey. Yamazaki, I was into that before it got super popular and sold out everywhere, so I'm bitter that it became very popular. It's much harder to get now, it's all blends. You can't get a proper 12 year, because they had so much demand they started blending their stock.
A
Yeah. Great Japanese whiskey grievance there.
B
Well, Japanese whiskeys are incredible. They're also not the best in the world, in my estimation. And the reason is, I think a 12 year Yamazaki is basically perfect, and there is an aspect of that perfection that means it's not a 10 out of 10. There's a bit where you have an incredible scotch and there's just something interesting about it that doesn't follow the formula, that's not quite right, and that is part of what makes it appealing. It's part of why maybe my favorite whiskey is one that someone else actually doesn't like. Whereas everyone who likes whiskey likes Yamazaki. It's a perfect whiskey.
A
Yeah.
B
But the fact that it's perfect is also why it's not the best. It's optimized.
A
Yeah.
B
And I do think there's a flattening in this perspective, that the AI is going to make everything the same. When you say, oh well, the AI is so amazing, it can make personalized versions for everyone. And I don't know, it sounds a little... this is why I hate venturing into the dorm room for this sort of analysis. But there is sort of a core assumption here: that we do have preferences and differences that make us unique, and that matters. That's why I invoked sex in the piece. What's the most base example I could have where I think people are gonna prefer humans over robots? Right. There's something beyond just a physical sensation. You want a sort of actual connection. And by the way, do you want a connection with just literally anyone?
A
No.
B
Ideally, you want it with, like, one specific person. And this is why these discussions are frustrating to me in a certain respect, because this is something we all know. But because you can't articulate it in a spreadsheet or whatever, it just sort of gets flushed away and dismissed in these discussions. And we end up in this scenario where, yeah, people are envious, we have to worry about inequality even if they have everything. And I'm like, well, what if we have everything and people still want something else? They still want something different or special and unique. That seems like that's gonna matter too, right?
A
And that will lead to jobs for people providing those special and unique experiences. Yeah.
B
Well, I guess we have to. Maybe we have to steer away from that one a little bit. But that's a different discussion for a different day.
A
The oldest profession there is. There will still be a market.
B
That's why I was hesitant to invoke that example in the article. Like, people could draw some connections here that are going to be a little weird. But I think this applies to a great many things.
A
All right. And that is the end of the free preview. If you'd like to hear more from Ben and me, there are links to subscribe in the show notes, or you can also go to sharptech.fm. Either option will get you access to a personalized feed that has all the shows we do every week, plus lots more great content from Stratechery and the Stratechery Plus bundle. Check it out, and if you've got feedback, please email us at email@sharptech.fm.
Episode: (Preview) The Economy in the 22nd Century, Amoral Tech and Silicon Valley Micro-Culture, What Nvidia Is Getting From Groq
Date: January 9, 2026
Hosts: Andrew Sharp (A), Ben Thompson (B)
In this episode, Andrew Sharp and Ben Thompson dive into futuristic speculation about technology, artificial intelligence (AI), and economic inequality in the 22nd century. Drawing on Ben’s recent writing and a holiday-circulated essay, they debate the philosophical and practical consequences of AI advancement—specifically, what could happen if AI becomes a zero-marginal-cost engine for all goods and services, and whether old human anxieties like envy or the need for social connection would outlast technological abundance.
On the absurdity of future-focused regulation:
"If that world arrives, I think everything is going to change. So like let's, let's relax a little bit here."
– Ben (10:51)
On zero-marginal cost goods and envy:
"So in this world, life sounds pretty grand. Everything’s taken care of. So why, why would we care about inequality? Because everyone has everything."
– Ben (12:51)
On the enduring value of human connection:
"...Humans, I think, like other humans... There’s something beyond just a physical sensation. You want a sort of actual connection. And by the way, do you want a connection with just literally anyone? No. Ideally, you want it with, like, one specific person."
– Ben (17:05–19:30)
On Instagram-driven envy:
"I do feel poorer than I actually am when I’m sitting there understanding what an F1 life looks like. And so, yeah, we have more access to the ultra wealthy these days. And it makes everybody pretty frustrated, right?"
– Andrew (15:00–15:24)
The episode carries Sharp Tech’s characteristic blend of irreverence, philosophical inquiry, and skepticism toward Silicon Valley dogma. Both hosts poke fun at futuristic thinking (“dorm room speculation”), but Ben’s optimism about humanity’s enduring value and Andrew’s relatable analogies keep the tone warm and engaging.
End of summary. For full episodes and more content, visit sharptech.fm.