Can Humans Maintain a Comparative Advantage Over AGI?
And how useful is the "lump of labor fallacy," really?
Maxwell Tabarrok recently argued on Substack that the comparative advantage of human labor will ensure that we never become worthless in the labor market, even if AGI can do everything we do. Basic econ, right?
Here's an example in the spirit of the one David Ricardo gave in 1817: Sally is better at every job than Emma, but there's only one Sally and only so many hours in the day. As a result, Emma can still work, and Sally can spend her precious time doing a job that almost no one else can do, even poorly. But no one should disparage Emma: she can still train, still specialize, and still be very useful to society. In the world at large, there are probably more Emmas than Sallys, so Emma's help is very much needed. This division of labor is part of what keeps unemployment rates low.
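To see the arithmetic, here's a toy version with made-up numbers (Ricardo's original example traded English cloth for Portuguese wine):

$$
\begin{array}{lcc}
 & \text{reports/hour} & \text{spreadsheets/hour} \\
\text{Sally} & 10 & 10 \\
\text{Emma} & 1 & 4
\end{array}
$$

Sally is absolutely better at both tasks, but the opportunity costs differ: every spreadsheet costs Sally one report she could have written instead, while it costs Emma only a quarter of a report,

$$
\text{Sally: } 1\ \text{spreadsheet} = 1\ \text{report}, \qquad \text{Emma: } 1\ \text{spreadsheet} = \tfrac{1}{4}\ \text{report}.
$$

So Emma has the comparative advantage in spreadsheets, and both come out ahead if Sally specializes in reports and trades for Emma's spreadsheets at any rate between a quarter of a report and one report per spreadsheet.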
As Tabarrok notes:
This applies just as strongly to human level AGIs. They would face very different constraints than human geniuses, but they would still face constraints. There would still not be an infinite or costless supply of intelligence as some assume. The advanced AIs will face constraints, pushing them to specialize in their comparative advantage and trade with humans for other services even when they could do those tasks better themselves, just like advanced humans do.
Noah Smith made a slightly more refined version of this argument a few months ago, focusing specifically on computing power:
It doesn’t matter how much compute we get, or how fast we build new compute; there will always be a limited amount of it in the world, and that will always put some limit on the amount of AI in the world.
So as AI gets better and better, and gets used for more and more different tasks, the limited global supply of compute will eventually force us to make hard choices about where to allocate AI’s awesome power. We will have to decide where to apply our limited amount of AI, and all the various applications will be competing with each other. Some applications will win that competition, and some will lose.
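To put rough numbers on Smith's point (these are purely illustrative): suppose the world has $10^6$ GPU-hours of compute per day, an AGI needs $100$ GPU-hours to complete one task, and there are $10^5$ tasks worth doing. Then the AGI can cover only

$$
\frac{10^6}{100} = 10^4
$$

tasks per day. The ten thousand highest-value tasks win the competition for compute; the other ninety thousand fall to humans, even though the AGI would have done every one of them better.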
As Zvi Mowshowitz cannily argues, however, Smith's argument breaks down in a truly unprecedented future, one that outfoxes the time-tested axioms of economics:
So yes, you can get there in theory, but it requires that compute be at a truly extreme premium. It must be many orders of magnitude more expensive in this future than it is now. It would be a world where most humans would not have cell phones or computers, because they would not be able to afford them.
Moreover:
Remember that we get improved algorithmic efficiency and hardware efficiency every year, and that in this future the AIs can do all that work for us, and it looks super profitable to assign them that task.
Exactly. What’s stopping AGIs from putting their noggins together and devising ways to improve their efficiency and self-replication capabilities? What if they find a way to get rid of constraints on compute? Isn’t that what humans are trying to do right now? Wouldn’t AGIs be better at figuring it out than us, given their inherent superiority? Couldn’t hyper-intelligent AGIs solve the problem of limited compute to a point where comparative advantages are practically nonexistent?
A plausible response: "Okay, sure, but the 'lump of labor'/'fixed pie'/'zero-sum' fallacy will ensure that human work is still relevant, even if that outcome ends up happening." In the words of Marc Andreessen's "Techno-Optimist Manifesto":
We believe that since human wants and needs are infinite, economic demand is infinite, and job growth can continue forever.
We believe markets are generative, not exploitative; positive sum, not zero sum. Participants in markets build on one another’s work and output. James Carse describes finite games and infinite games – finite games have an end, when one person wins and another person loses; infinite games never end, as players collaborate to discover what’s possible in the game. Markets are the ultimate infinite game.
Sure, humans can still do work that AGI is eminently capable of doing without help, and they can still eke out a teeny-tiny bit of value through their labor atop the AGI ad infinitum, like sprinkles on an ice cream sundae. Even negligible value added is value added, after all. Andreessen is correct. It’s all positive sum.
But when humans do little make-work projects at the margins of the economy, for the sake of feeling "important" and valued, does that fit the conventional definition of "labor"? Is this the right direction for society to go in? Will it make us happy in the final analysis?
There's good reason to think that even more "bullshit jobs" would make society worse. In his 2018 book Bullshit Jobs, David Graeber argued, pre-LLMs, that tons of these useless jobs already existed, and that John Maynard Keynes had been right to predict that people would one day only need to work 15 hours a week, though Keynes failed to account for the stratospheric rise of consumerism. (Graeber was wrong about the prevalence of bullshit jobs, as a 2021 study found, but his assertion that a "perception of doing useless work is strongly associated with poor wellbeing" was upheld by the researchers.) Graeber proposed universal basic income as a remedy to the problem he diagnosed.
"Bullshit jobs" are indeed bad, even if Graeber, writing seven years ago, overestimated how common they were. He gave us invaluable shorthand for making sense of the economic crater that companies like OpenAI and Anthropic may be on the brink of leaving. (Graeber did not live to see the launch of ChatGPT.)
Another Cassandra was Andrew Yang, whose $1,000-a-month UBI plan made him a laughingstock of the Democratic Party. Unlike Graeber, Yang never argued that we were already past the tipping point at which UBI becomes an imperative. Instead, he argued that the tipping point would arrive in the near future, and that precaution was a moral responsibility.
I remember reading his book The War on Normal People when he ran for president in 2019. Callow as I was at the time, I felt that his book went a long way toward addressing the atomization and discontent that had become the national mood during Donald Trump's presidency. He talked about how displaced manufacturing workers were struggling to reskill, and how young, unemployed men were spending too much time at home playing video games. Even though white-collar or even blue-collar automation didn't seem like a very pressing problem at the time, I appreciated that he was trying to be prudent, and I counted myself among the "Yang Gang." Five years on, Yang is the only political candidate I've ever phonebanked for.
I'd hoped Yang would advance to the final stages of the Democratic presidential primary, if only so his ideas would gain more visibility. Of course he wasn't going to win (he was a total nobody focused on the "irrelevant" stuff), but maybe he would at least get people to think about UBI in, say, 2024. And as we all know, that didn't happen. Kamala Harris was an empty suit, thrust suddenly into the spotlight after a belated baton handoff. With 100 days until the election, there was no bandwidth for big-picture thinking, and any coherent words that left her mouth were scripted by a coterie of out-of-touch Democratic operatives who were true believers in the Biden administration's legacy. They had an utterly fictive mental model of the American people's hopes and aspirations. They couldn't even muster up a bit of Bernie Sanders-style populism as a foil to Trump's.
Back in 2019, Yang perhaps did much better than he bargained for, making seven of eight primary debates. His moment passed when it became clear that Biden’s primary victory was assured. His subsequent campaign for the New York City mayoralty was a bust. This was his own fault, but one wonders what would’ve happened if he’d bided his time a bit longer and avoided running for office again so soon.
Andreessen hates UBI, by the way, and denounced it in his manifesto:
We believe markets also increase societal well being by generating work in which people can productively engage. We believe a Universal Basic Income would turn people into zoo animals to be farmed by the state. Man was not meant to be farmed; man was meant to be useful, to be productive, to be proud.
Proud of what?
Let's say that democracy still functions in the future and UBI comes about, funded by taxes on Andreessen and others, and people are empowered to spend all their free time enjoying the "fruits" of automation. Sam Altman supports a UBI, so it's not unimaginable that we might get one someday. What happens next? That's what philosopher Nick Bostrom explores in his recent book Deep Utopia: Life and Meaning in a Solved World. I haven't read it yet, but based on the odds and ends I've gleaned from interviews with Bostrom, it mostly theorizes about a future in which humans have unfettered access to blissful simulated realities, a largesse granted to us by aligned AGI.
As Scott Alexander correctly points out in his (mostly negative) review of Deep Utopia, Bostrom's simulations guarantee neither happiness nor "meaning" if it's all fake, and even the Sisyphean struggles a user generates for themselves are fundamentally fake:
The only sentences in Deep Utopia that I appreciated without reservation were a few almost throwaway references to zones where utopia had been suspended. Suppose you want to gain meaning by climbing Everest, but it wouldn’t count unless there’s a real risk of death. That is, if you fall in a crevasse, you can’t have the option to call on the nanobot-genies to teleport you safely back to your bed. Bostrom suggests - then doesn’t follow up on - utopia-free zones where the nanobot-genies are absent. If you die in a utopia-free-zone, you die in real life.
This is a bold proposal. What happens if someone goes to the utopia-free zone, falls in a crevasse, and as they lay dying, they shout out “No, I regret my choice, please come save me!”? You’d need some kind of commitment mechanism worked out beforehand (a safe word?). Should it be legal to have safe-word-less, you-really-do-die-if-you-fall zones? Tough question.
Alexander does cite one potential source of meaning in this fully automated “deep utopia” that I find legitimate:
Many philosophers have said that the most human activity - maybe even the highest activity - is politics. [He’s probably referring to Aristotle or Hannah Arendt here, if I had to guess.] Bostrom barely touches on this, but it seems like another plausible source of meaning in deep utopia. No matter how obsolete our material concerns, there will still be open issues like the one above.
And from a LessWrong post that went viral recently:
Politics might be one of the least-affected options, since I'd guess that most humans specifically want a human to do that job, and because politicians get to set the rules for what's allowed.
What is the most realistic application of that most human activity, politics, in a “solved world”?
Getting back the one desideratum that was sacrificed in the creation of the “solved world”: human agency. That is something a singularity fundamentally can’t solve for unless it partially recedes from view.
Restoring human agency would necessitate rolling back technological development to a certain extent. Not a large extent, but certainly some extent.
Andreessen is right: we shouldn't become zoo animals. But as Erik Hoel put it, "No brilliant young entrepreneur could outcompete a major corporation with a new startup, since the major corporation will have so much more capital with which to just buy equally-brilliant intelligences and deploy them." Andreessen frets about being farmed by the state, but he just can't square that with the overlap between corporate and state power: a handful of AGI-owning corporations could pen us in just as effectively.
I could make a Fukuyamian prediction that some sort of rollback is the logical end point of technological progress. But I’m not going to do that right now.