–Arts and letters for the modern age–

Cathode Ray Zone


Preparing for the Age of Uselessness

Dec 14, 2022

The comedian Paul F. Tompkins once said that we live in a golden age of cowardice. What he meant was that the Internet allows for more people than ever before to anonymously throw brickbats at strangers. Along similar lines, I wonder if we’re about to enter an age of uselessness. 

Fundamentally, what I’m worried about is AI. There has been more and more discussion of programs like GPT-3 and ChatGPT. In brief, these programs can usually pass the Turing Test – that is, most people can, in many circumstances, be fooled by them into thinking they’re dealing with a human-level intelligence. Moreover, AI is often better than humans – not just at games like chess and go, but also at more primal abilities, like facial recognition. It’s hard to think of what AI is not going to get better than humans at, and soon.

It’s not the mere fact that AI can be as good as or better than humans at lots of things that matters; it’s also how quick it is. You can get nice-looking pictures made by DALL-E in seconds. You can have GPT-3 immediately produce a term paper good enough for college credit. And these are only the tip of the tip of the iceberg. Here’s what economist Samuel Hammond writes:

… within a decade, ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag. You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything. Bots will slide into your DMs and have long, engaging conversations with you until it senses the best moment to send its phishing link. Games like chess and poker will have to be played naked and in the presence of (currently illegal) RF signal blockers to guarantee no one’s cheating. Relationships will fall apart when the AI lets you know, via microexpressions, that he didn’t really mean it when he said he loved you. Copyright will be as obsolete as sodomy law, as thousands of new Taylor Swift albums come into being with a single click. Public comments on new regulations will overflow with millions of cogent and entirely unique submissions that the regulator must, by law, individually read and respond to. Death-by-kamikaze drone will surpass mass shootings as the best way to enact a lurid revenge. The courts, meanwhile, will be flooded with lawsuits because who needs to pay attorney fees when your phone can file an airtight motion for you? 

In other words, AI is on the cusp of making humanity massively more productive in an innumerable array of fields, for good and for bad. In many ways, this is a good thing. To take just one example: I play Dungeons and Dragons on a weekly basis with my friends. In Dungeons and Dragons, you role play as all manner of fantastical characters; for instance, in one campaign I play in, my character is Uldress Wolfslayer, a wood elf ranger. Thanks to AI art programs, I don’t have to just imagine what my character looks like; instead, I can use programs like DALL-E to make a really neat-looking picture of my character so that now, everyone can see him. 

While this is really great for ordinary consumers, it isn’t so great for artists. DALL-E isn’t perfect yet, so there is still a place for artists who want to make money by designing T-shirts, storefronts, and the like. But 80% of the work they used to do can be done just as well, and immediately, and for free, by AI. This is not to say that there will be no place for artists in the near future. After all, surely at least some people will want something that they know is a product of the human imagination. But I would estimate that most of the time, people don’t really care about that – they just want a picture that looks a certain way, and aren’t concerned about its provenance. 

Artists, then, may soon become useless on an industrial scale. But soon enough, everyone will be in the same boat. For pretty much any endeavor you can imagine, it seems like AI will soon put its human competitors out of business. 

If this is right (big if, but bear with me), then it seems to me that people are becoming ever more useless. More and more, we won’t be needed for anything. We will have entered the age of uselessness.

How will we feel about this development? And just as important, how should we? 

One prediction about how we will feel about this development: very bad. In “Economic Possibilities for Our Grandchildren”, Keynes surmised that by 2030 we would solve what he called “the economic problem”—i.e., we would be able to ensure that all people could satisfy their material needs for food, clothing, shelter, etc. He worried, though, that once we solved the economic problem, we would be faced with a more difficult problem, which he called “the permanent problem of the human race”. This problem is the problem of what to do with ourselves once we no longer have to work to survive. In Keynes’s words, “I think with dread of the readjustment of the habits and instincts of the ordinary man, bred into him for countless generations, which he may be asked to discard within a few decades. To use the language of today – must we not expect a general ‘nervous breakdown’?”

The philosopher Bernard Suits also predicted something like this. In his book, The Grasshopper: Games, Life, and Utopia, the main character—the Grasshopper—predicts a utopia in which no one will have to work to survive. He had in mind a world in which machines do all our physical, intellectual, and emotional labor for us, leaving us free to do whatever we want. Since, however, there would be no work, the only thing we could do would be to play games. Indeed, even trying to work – say, building a table because you refuse to use machines to make your life easier – would itself be a game, for according to Suits (and oversimplifying a bit), “playing a game is the voluntary attempt to overcome unnecessary obstacles.” Since you don’t need to build a table—a machine could do it for you, better and more quickly—your insisting on building one yourself amounts to your voluntarily putting an unnecessary obstacle before yourself and trying to overcome it. I.e., it amounts to playing a game.

Like Keynes, Suits worried about this utopia. Indeed, he seemed to predict that should we ever achieve such a utopia, we would try to destroy it, because we wouldn’t be able to bear to live a life where we were useless. He has the Grasshopper say: 

I saw time passing in Utopia, and I saw the Strivers and the Seekers coming to the conclusion that if their lives were merely games, then those lives were scarcely worth living. Thus motivated, they began to delude themselves into believing that houses made by people were more valuable than houses made by computers, and that long-solved scientific problems needed resolving. They then began to persuade others of the truth of these opinions and even went so far as to represent the computers as the enemies of humanity. Finally they enacted legislation proscribing their use.

So, we have two eminent thinkers worrying that the resolution of the economic problem will bring with it something like a crisis of meaning. All our needs will be met except the need to be needed. And the need to be needed is the most needful need of all. 

I’m beginning to worry, though, that this worry won’t pan out. I get this from interacting with the youth as their college professor. One of the things I have noticed about my students is that, while they want to get good grades, they don’t seem to care if the grades they get are at all indicative of actual quality. I’ll elaborate. 

I don’t grade based on attendance. Instead, I grade based just on the work students do for the class. That means it’s possible for a student to almost never show up for class and to nonetheless get an A. The students who manage to get A’s while doing this never seem to feel guilty about it. They seem happy that they got an A while doing very little work. 

Going further, I once asked a student which he would be prouder of: getting an A after studying really hard for it, or getting an A on a test after being told all the answers. He said he’d be equally proud in each scenario. I was dumbstruck. “Why would you be proud of an A for which you did no work?” He responded, “I got the A. That’s what matters.” 

For a long time, I thought that this student simply wasn’t vividly imagining what I was asking him to imagine. But the more I think about it, the more I’m coming around to the idea that I’m the weird one. 

What I’m getting at is that I think the following is possible: there will come a day when AI is better at literally everything than any human is, humans end up spending all their time interacting with AI, and almost no one minds. No luddite destruction happens. Instead, people just get really pleased, and they lived happily ever after. 

Strangely, this possibility depresses me. Apparently, I want people to be unhappy with utopia, and to have a major crisis of meaning. I tell myself that we are not mere playthings of nature, but are instead rational beings who can and should conduct ourselves in a certain way, lest we dishonor our dignity. But a world of people fully fulfilled by hedonism seems to give the lie to all that. If we can be happy while being useless, maybe we never had an elevated station at all. But we do have dignity, so we would be unhappy.

In other words, I think the utopia-worriers—the people who fear that an AI-fueled paradise will be unsatisfying—are fearful because they think it should be unsatisfying. But should it be unsatisfying? 

Pets have guided my thinking on this question. I look at my cat, and I joke, “you get paid way too much.” The point of the joke is that I’m expecting more from my cat than he can give. Sure, he’s cute and I like petting him, but he doesn’t do anything useful, like killing bugs. Instead, he just lies around, gets some scritches, and licks his genitals. 

If the AI-optimists are right (again: big if, but it doesn’t seem impossible), then there will come a time when humans will be as useful as pets. Our use-value will consist almost entirely in our ability to entertain each other. And yet, I don’t think we’ll find that unsatisfying, because we’ll be living in castles in the Cloud, and that’s more than good enough for most of us. 

But should it be good enough? Should a world in which we live in the holodeck until we die dissatisfy us? Is the Cloud, as uplifting as it is, actually beneath our dignity?  

Here, I turn to Kant. As I understand him, people have a worth beyond price because they can resist their desires and set their own ends in accordance with the moral law. In other words, our dignity stems from our having autonomy, which in turn is the result of (or amounts to) our having free will and rationality. Interestingly enough, Kant imagined a utopia as well, which he called the highest good. It is the state of affairs in which (1) everyone is happy in proportion to his virtue, (2) everyone is perfectly virtuous, and so (3) everyone is perfectly happy. What bothers at least some of us about hedonistic utopia is that although everyone is perfectly happy, no one is perfectly virtuous, and so no one is happy in proportion to his virtue. In other words, everyone is happy, but no one deserves to be. Paradoxically, we Kant-sympathizers think that a necessary condition of our being worthy of any physically possible (i.e., AI-generated) utopia is rejecting it if we ever got it.

To put the thought in other words: in utopia, we would all be like pets. Indeed, once in utopia, you have no way of being anything but like a pet. The problem is, pets don’t have the same worth as people. But what does it take to have the worth of a person? 

For most Kantians, what makes a person valuable is having the capacity for autonomy. You have a value beyond price because you can rise above nature. 

If this is all it takes, then we needn’t worry. If humans have the capacity for autonomy now, they’ll have it in the future too. Sure, they won’t exercise it, but that’s only because they won’t need to. The AI will solve our problems, so we can put virtue out to pasture.

However, Kantians think there is something bad about not exercising your autonomy. Your autonomy guarantees you have moral value, but your never exercising it means you have no moral skill. You are not living up to your value. So, such Kantians would be upset by AI-utopia. It’s no highest good.

There are only two solutions I see for the Kantian. The first solution is to be a prepper, but for morality. Preppers, in case you don’t know, are preparing themselves for Armageddon. They stock up on guns, ammunition, and food. They have a panic room, a bunker, and a generator. They’re training themselves to forage for food, to avoid sickness, and to deal with physical threats. When the shit goes down, they’ll be ready.

A moral prepper is someone who foresees a future in which no one worries about morality anymore, because they think they don’t need it. Consequently, they don’t develop their capacities at all. Should the AI go away because of a coronal mass ejection, such people will be at a loss. The moral prepper, on the other hand, trains himself for this very day. He uses AI, but to improve his moral decision-making. He puts himself in simulations that force him to make difficult choices. He gets to the point where he would be able to do the right thing, even outside of a simulation. 

The other solution is not to worry, because nothing like this will ever come to pass. When my father was dying, I recall thinking, “it’s 2016! Shouldn’t our technology be so advanced that they can cure his cancer?” Maybe it should have been, but it wasn’t. And from what I know, any such technology is far away. AI is making us more productive, but it’s creating just as many problems as it’s solving. Thus, there will always be work for us to do and so we will always have to exercise our autonomy. 

I think this second solution is right, not because I know that an AI utopia will never come to pass, but rather because I foresee a period of intense chaos ahead of us, wrought in large part by AI. Whatever the merits of moral prepping, its usefulness seems merely notional, because a different kind of preparation is needed right now. Namely, we need to figure out what we think is important. In particular, we need to figure out how to think of people whose usefulness gets superseded – a group that may one day include ourselves.