Kara’s on vacation this week, so we’re bringing you an episode of another great Times Opinion podcast, ‘The Ezra Klein Show.’
“The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel,” writes Sam Altman in his essay “Moore’s Law for Everything.” “This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.”
Altman is the C.E.O. of OpenAI, one of the biggest, most important players in the artificial intelligence space. His argument is this: Since the 1970s, computers have gotten exponentially better even as they’ve gotten cheaper, a phenomenon known as Moore’s Law. Altman believes that A.I. could get us closer to Moore’s Law for everything: It could make everything better even as it makes everything cheaper. Housing, health care, education, you name it.
But what struck me about his essay is that last clause: “if we as a society manage it responsibly.” Because, as Altman also admits, if he is right then A.I. will generate phenomenal wealth largely by destroying countless jobs — that’s a big part of how everything gets cheaper — and shifting huge amounts of wealth from labor to capital. And whether that world becomes a post-scarcity utopia or a feudal dystopia hinges on how wealth, power and dignity are then distributed — it hinges, in other words, on politics.
This is a conversation, then, about the political economy of the next technological age. Some of it is speculative, of course, but some of it isn’t. That shift of power and wealth is already underway. Altman is proposing an answer: a move toward taxing land and wealth, and distributing the proceeds to all. We talk about that idea, but also the political economy behind it: Are the people gaining all this power and wealth really going to offer themselves up for more taxation? Or will they fight it tooth and nail?
We also discuss who is funding the A.I. revolution, the business models these systems will use (and the dangers of those business models), how A.I. would change the geopolitical balance of power, whether we should allow trillionaires, why the political debate over A.I. is stuck, why a pro-technology progressivism would also need to be committed to a radical politics of equality, what global governance of A.I. could look like, whether I’m just “energy flowing through a neural network,” and much more.