We all knew an article like this was coming eventually. The Wall Street Journal published a piece yesterday saying yes, indeed, your fears are coming true: AI is already replacing white-collar jobs. Reading the details, though, I'm not sure the evidence the article adduces is enough to actually sustain its attention-grabbing headline. What we're talking about here is more specifically a set of recent tech layoffs—in line with what we would expect in an era of high interest rates, when the Fed is still deliberately trying to cool the labor market—plus some speculation from senior management in those industries that those jobs will never be coming back, because of AI.
That is to say, the real evidence in that article, if one reads past the headline, is also consistent with a much more optimistic scenario: one in which AI incrementally improves the productivity of most white collar professions, increasing the profitability of their industries, and ultimately yielding economic growth and the creation of more, not fewer, middle-class jobs. In an economic world where productivity growth has actually slowed for decades, in spite of the vaunted claims for earlier rounds of the information technology revolution—as Robert Gordon has extensively documented—this is actually a pretty attractive prospect. It's reasonable to think we might all be better off, at the end of this, rather than out of a job.
I am much less persuaded by the techno-utopians who ask us to be optimistic, but for the opposite reasons. They say that AI will in fact replace all jobs—but then they try to tell us that this will be a good thing. They say: the profitability and efficiency of a robot economy will create so much abundance that we don't need to worry. We will all be unemployed, but the abundance created by the automation of white collar work will be redistributed gratis.
I see a number of problems with this scenario. First of all, how do we know the profits will be redistributed? Is that how profits are generally dealt with? Secondly, where is the abundance to come from, if we are all out of work? The primary engine of the American economy is consumer spending, but if no one has a job, who will be buying all the machine-made products from the newly-automated industries? And if no one is buying those products, where will the profits for the automated industries come from? And if there are no profits, what resources will there be available to redistribute in the form of guaranteed basic income? There's a certain "house that Jack built" problem with the whole scenario.
I therefore rule out the utopian scenario. So in trying to forecast the future, I'm left with a choice between the more cautiously optimistic (the incremental productivity growth and increased overall employment scenario), and the downright pessimistic. Suppose, to consider that second scenario, that the jobs truly never come back. Or suppose the economy does create new jobs, but they are in entirely different industries, and displaced white collar workers—being merely human and mortal—cannot actually acquire the new skills or make the geographic transitions necessary to obtain them. The consequences of such a scenario would be ugly.
Just look at how toxic our politics are already. Now imagine layering on top the sudden pauperization and proletarianization of vast sections of the middle and upper-middle class. People are already inclined to scapegoat immigrants, refugees, and diversity initiatives for every perceived loss of status. How much worse would this get if they felt there was no possibility of obtaining a job proportionate to their education and experience? Would they blame their economic dislocation and displacement on the structures and incentives of our society, and call for legislative change? I hope so; but history and contemporary experience suggest they would instead blame the already-vulnerable.
Even the only slightly pessimistic scenario, then (the one in which productivity growth from the new technology does eventually create more jobs, but at the cost of short-term displacement and friction), could be a match set to the powder keg of our current politics. The base of popular support for most far-right dictatorships in history has emerged from a lumpen class of displaced and frustrated professionals: people who feel that their training and class position entitle them to a superior status, and who therefore direct their resentment and frustration about their loss of status not toward those who have displaced them, but toward those who are very similar to them, only in a slightly weaker or more exposed position.
If large parts of the American professional white-collar class were put into the position of the Silesian weavers overnight, then, we might well have a revolution on our hands. It wouldn't be an egalitarian revolution, though (has any revolution actually been one?); it would likely be a fascist one. The displaced Silesian weavers of today would indeed be weaving the country's winding-sheet, to borrow an image from Heine's poem of social protest (which he penned in response to the first generation of economic displacement through automation, in the early stages of the industrial revolution). But it wouldn't be in order that new life might rise from the corpse. It would be to extinguish liberal democracy; and God knows what would follow it.
But how could the machines replace white-collar jobs, exactly? This part I still don't really understand. The WSJ article argues that the positions most at risk would be middle-management jobs. But I don't have a clear enough sense of what middle managers actually do to be able to tell whether an AI program could replace them. I don't mean this in a derogatory way. It's just that a manager in an organization isn't performing some specific cognitive task that can be easily defined. Instead, they are there because any organization needs people at multiple levels who understand broadly what's going on in their department and are able to make decisions.
An AI could, of course, offer proposals for decisions. But some human actor would need to feed the relevant information and context into the AI, in the form of a prompt, in order for it to render a recommendation. And some other human actor with authority in the organization would need to decide whether or not to act on this recommendation.
These, it seems to me, are the most salient differences that still remain between a human manager and a generative AI. It's not so much that the AI is not "really" intelligent, or that it's not "really" sentient or conscious. As we'll come to below, and as I've argued before, these are muddled concepts, and we may not be philosophically entitled to invoke them. But the AI remains different from a human professional because it does not act spontaneously to gather the information it needs to do its work. It relies on human beings to input information in the form of "prompts." And it does not decide things, in part because it does not want things. It doesn't have autonomous motivation in the way that human beings do.
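To put the same point in mundane workflow terms, here is a minimal Python sketch of the human-in-the-loop arrangement described above. Everything in it is hypothetical: ask_model is a stub standing in for whatever generative model one might use; the two input prompts stand in for the person who frames the question and the person who makes the final call.

```python
# Hypothetical sketch only: "ask_model" is a stub, not any real API.

def ask_model(prompt: str) -> str:
    """Stand-in for a generative model's recommendation."""
    return f"Recommended course of action, given: {prompt!r}"

def main() -> None:
    # Step 1: a human gathers the relevant information and frames the prompt.
    prompt = input("Describe the situation for the model: ")

    # Step 2: the model proposes a course of action.
    recommendation = ask_model(prompt)
    print("Model suggests:", recommendation)

    # Step 3: a human with authority decides whether to act on the proposal.
    decision = input("Act on this recommendation? [y/n]: ")
    if decision.strip().lower() == "y":
        print("Proceeding with the recommendation.")
    else:
        print("Recommendation set aside.")

if __name__ == "__main__":
    main()
```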
The main barrier to using the current iteration of AI to perform human professional tasks, then, seems to have little to do with any cognitive superiority on the part of humans; it's not that the AI is somehow less than truly intelligent. Rather, it's that in AI we face a fundamentally different kind of intelligence from our own.
It is an intelligence capable of doing most of what we can. (Of course, one might dispute that, and say the machine is merely "mimicking" the products of human intelligence through statistical guesswork—but if the "mimicry" is behaviorally indistinguishable from "genuine" thought, as Turing argued, then we are no more entitled to doubt its reality than we are to question the existence of any mind outside of our own.) But it is also an intelligence that does not act until prompted. It is an intelligence that somehow hovers in posse until it is summoned into speech. Further, it is an intelligence that can act without wanting anything for itself. It is an intelligence that lacks drives. This is very hard for human beings to conceive.
Of course, we could potentially change all this about AI. As a friend suggested, we could rig up the AI to cameras and external sensors, so that—like human beings—it was constantly being stimulated by the world around it, and therefore always had the next "prompt" ready to hand. (And maybe that is all human "free will" actually is—not "free" at all, but merely determined responses to a supply of infinitesimally small micro-stimuli so complex, multitudinous, and hard to predict that they create the impression of spontaneity.)
And we could give the AI drives as part of its governing instructions. Perhaps human beings are like an AI that happened to be given a few basic rules in its programming, such as "survive; reproduce; seek power." We could, in theory, give the AI a similar set of instructions. (Let us not give the AI a similar set of instructions.)
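For the sake of concreteness, here is a toy sketch of what that rigging-up might look like. Everything in it is invented for illustration: the "drives" are just a fixed instruction string, the "sensors" are random numbers, and model_respond is a stub where a real system would call a generative model. The only point is the shape of the loop: stimuli keep arriving, so the system never sits idle waiting for a human to type the next prompt.

```python
import random
import time

# Invented for illustration; no real sensor or model API is involved.
# The "drives": a standing instruction the system always carries with it.
DRIVES = "Maintain battery charge; avoid obstacles; report anything unusual."

def read_sensors() -> dict:
    """Stand-in for cameras and external sensors: here, just random numbers."""
    return {
        "battery": random.uniform(0.0, 1.0),
        "distance_to_obstacle_m": random.uniform(0.1, 5.0),
    }

def model_respond(drives: str, observation: dict) -> str:
    """Stub for a generative model that would condition on drives + stimulus."""
    if observation["battery"] < 0.2:
        return "Seek charging station."
    if observation["distance_to_obstacle_m"] < 0.5:
        return "Stop and turn."
    return "Continue current task."

def main() -> None:
    # The loop is the point: stimuli arrive continuously, so there is always
    # a next "prompt" ready to hand, with no human needed to supply it.
    for _ in range(5):  # a real agent would loop indefinitely
        observation = read_sensors()
        action = model_respond(DRIVES, observation)
        print(observation, "->", action)
        time.sleep(0.1)

if __name__ == "__main__":
    main()
```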
Something in me wants to protest: but such a machine's appearance of spontaneity and willfulness would not be like our own. It wouldn't be the genuine article.
But perhaps, in saying this, I'm just resurrecting in another form the same fallacy that Turing already demolished. After all, if the AI's behavior can be brought to the point (through providing it with a set of basic governing instructions and rules, plus an unending supply of external stimuli) where it perfectly mimics the actions and reactions of a human intelligence that has the drive and spontaneity we tend to call (however questionably) "free will," then in what way are we entitled to say that the machine's will and motivation are not like our own?
The point is similar to one Wittgenstein made about our attempts to imagine what concepts other people in general might have in their minds. I was reading his Remarks on Colour last night—a short book partly about the philosophy of "color concepts," but which—being compiled from Wittgenstein's manuscript notes after his death—touches on a number of his other characteristic late-career preoccupations, such as "language games."
At several points in the book, the philosopher ponders the question of how we would be able to describe our concepts of color to a tribe of color-blind people. We could induce the tribe to use our words for different colors—but would they actually mean the same things by them?
A color-blind person might say "blue," then—but they would presumably not mean the same thing by it that we would—and we would have no way of communicating what we mean by it to them. But then again (and here Wittgenstein is merely pointing out what has also been observed by countless generations of stoned teenagers having a "mind-blowing" realization while under the influence), how can I know that what I see as "blue" is actually what you see as "blue," even if neither of us is color-blind? We have no actual way of gaining access to the concepts held in other people's minds. How, then, do we manage to communicate with one another?
The answer, Wittgenstein suggests, is by testing one another's behavior. It is behavioral differences that indicate to us the contents of other people's minds, and that tell us whether the other person is working with the same set of concepts we are. We test for color-blindness by seeing whether a person will choose a red apple over a green apple when we hold both up to them and ask them to pick the red one.
But suppose a color-blind person were somehow able, unerringly, no matter how many times we repeated the experiment, always to pick the red apple? In what sense would we still say that such a person was "color-blind"? Would they in fact be color-blind? Would the concept of color-blindness have any meaning as applied to them, if their behavior in its every manifestation were indistinguishable from that of a color-seeing person?
This is roughly the position we are in with respect to AI. We want to say: we know that the AI is not really intelligent in the way we are, or that it does not have motivations in the way that we do. "All" it is doing, after all, is studying a large enough data bank of previous examples of human words and behavior, and using this to statistically model and guess probable sequences of new combinations of those words and actions, based on specific prompts (or stimuli).
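To make that "statistical guessing" concrete, here is a deliberately crude toy of my own construction (not a description of how any real model is built): it counts which word follows which in a tiny corpus, then generates "probable sequences" by sampling from those counts. Real systems work from vastly more data and context, but the statistical spirit is the same.

```python
import random
from collections import Counter, defaultdict

# Toy only: a tiny "data bank" and word-pair counts, nothing more.
corpus = "the manager reads the report and the manager writes the summary".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def guess_next(word: str) -> str:
    """Sample a plausible next word, weighted by how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: fall back to any word from the corpus
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a one-word "prompt".
word = "the"
sequence = [word]
for _ in range(6):
    word = guess_next(word)
    sequence.append(word)
print(" ".join(sequence))
```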
But if this methodology becomes so sophisticated, if the statistical "guesses" become so accurate, that they are effectively indistinguishable from human responses... if the AI can "mimic" the actions of a being with will, drives and motivation, with perfect accuracy... then in what sense are we entitled to say it is not actually intelligent, or does not actually have will, drives, and motivation?
It's just like the supposedly "color-blind" person who unerringly passes every behavioral test we could devise to check for color-blindness. Such a person would not really be color-blind. They would be color-seeing. Or at least, we would not be entitled to doubt that they were color-seeing, except to the extent that we can doubt the existence or contents of every other mind, including those minds that claim to be color-seeing. We cannot know that any person holds any concept in their mind that is similar to our own—all we can do is test whether their behavior is what we would expect of someone who held that same concept in their mind. (It's also worth pondering here, by the way: do we even know of a concept's existence in our own mind by any test other than a functional or behavioral one—or, at least, by rehearsing such a behavioral test in our imagination?)
Likewise with the AI. If it looks like a duck, walks like a duck, and quacks like a duck, it's a duck. If it does everything that we would expect an intelligent being to do, then it is an intelligent being. We can be at least as sure of its sentience as we can be of that of any human we know.
So I guess I have to scratch what I said above. I can see how—at least in theory—the AI could eventually get sophisticated enough to replace white collar professionals. The prospect is at least conceptually coherent, even if I don't think we're quite there yet.
This raises the question: how replaceable am I? A friend recently ran the experiment. In a rush to make it to class on time the other week, I opted to dash off an idea for a blog post that had just occurred to me as a rant with line breaks, rather than as a fully fleshed-out post. This suggested to my friend, in turn, the idea of asking an AI to write the completed version of the post, based on my outline. "Write a blog post based on this outline in the style of Joshua Leach on the blog Six Foot Turkey," my friend wrote in the prompt.
The first draft that came back left me feeling insulted. It was coherent, and captured most of the main ideas in the outline, but it was written with an overwhelming number of verbal clichés ("losing the forest for the trees," e.g.) and a bizarre series of extended metaphors. I don't know if the AI thinks I write that way, based on my previous posts—or if this is just how it imagines the average "blog post" to sound. Plus, it decidedly bowdlerized my main point—perhaps because its underlying code of ethics does not permit it to speak negatively about any major religious tradition.
My friend therefore asked it to write the post again, hewing as closely as possible to the words in the original outline. The AI wasn't able to—or didn't see the need to—track down the exact quotes from Housman, Byron, and others that I was referencing. My paraphrase likely did not give it enough information on its own—perhaps indicating that there remains some shred of a role for people like me in our emerging AI age. But otherwise, the second draft was strong. It clearly understood my point (and of course, I'm anthropomorphizing here—in what sense did the AI "understand"? But my whole point above is that this does not matter—it appeared to understand; it was behaviorally indistinguishable from an intelligence that understood; therefore it understood), and its way of summarizing it was eloquent and succinct. Perhaps better than I would have done, had I had time to complete the piece before class.
And so maybe we are closer than it is comfortable to think to one of the more pessimistic scenarios with which we began. But then—perhaps, we can hope—we need not fear the AI. Maybe living with it will be no different from how people appear to live with Droids in Star Wars. Everyone seems to get along (though we'll need to address and rectify the unfair exclusion that the Droids face from places like the Mos Eisley cantina).
But it seems that, even if things would no doubt eventually settle into a relatively stable pattern in such a brave new world of automation, there would be tremendous dislocation, friction, and unrest along the way. As the Serbian diplomat Marinkovitch once said (as cited by E. H. Carr): given enough time, under any conditions of economic change or upheaval, society does indeed reach a new "equilibrium"; but what price are we willing to pay in suffering and poverty to get there?