Scarcely a day goes by lately without another set of panicky headlines about the new generative AI technology. Will the AI render screenwriting obsolete? Or voice acting? And what is the best way to describe its cataclysmic potential? The metaphors range from Ezra Klein comparing the machine to a demon let loose from another dimension all the way to the Wall Street Journal calling generative AI the "next iPhone moment," a phrase that likewise speaks to the technology's disruptive capacities but sounds a lot less scary, one must admit. (We all lived through the first "iPhone moment" a decade and a half ago, after all, and while it brought significant changes to many industries, we survived, didn't we?)
One of the few things the prophets can agree on, though, is that the new technology, however apocalyptic or mundane it may ultimately prove to be, will almost certainly alter or displace many current jobs. How exactly it may do so, and whether the displacements will result in incremental productivity gains that are ultimately beneficial or rather in extremely painful mass layoffs, is less clear.
After all, the new technology, while astonishingly powerful in its ability to mimic certain verbal and cognitive tasks, is also error-prone and untrustworthy in its outputs. Many boosters of the new tech see these problems as temporary imperfections that will be ironed out as the AI advances, pointing to the quantum leap in sophistication between just the latest two versions of ChatGPT, released a few months apart. But given the way these language models work, I am not convinced the tendency to confuse fact and fiction can ever be eliminated from them entirely.
After all, the basic mechanism of the technology is to model the relationships between words and to use these statistical correlations to guess the next word (and then the next, and the next) in a verbal sequence. The result is plausible-sounding sentences that mimic human language. But the machine is remarkably limited in its ability to retrieve specific information or report facts accurately, because all it is fundamentally doing is synthesizing and paraphrasing vast amounts of human language from the internet to create something that sounds plausible but may not be accurate. If you seek a single concrete answer to something, therefore, a search engine is still a much more trustworthy guide than a chatbot.
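To make the mechanism concrete, here is a minimal sketch in Python of next-word prediction from simple bigram counts. It is my own toy illustration, not how ChatGPT is actually built (real models use enormous neural networks trained on far richer context), but the basic task, guessing the next word from statistical correlations, is the same:

```python
# A toy illustration of next-word prediction from bigram statistics.
import random
from collections import Counter, defaultdict

corpus = (
    "the weavers tend the loom and the loom weaves the cloth "
    "and the cloth feeds the town and the town feeds the weavers"
).split()

# Count how often each word follows each other word in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a "plausible-sounding" sequence: it mimics the corpus's patterns,
# but it has no notion of whether anything it produces is true.
sequence = ["the"]
for _ in range(12):
    sequence.append(next_word(sequence[-1]))
print(" ".join(sequence))
```

The output reads like grammatical English about weavers and looms, but nothing in the program checks whether any statement it produces is accurate. That, in miniature, is the problem.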
A friend and I recently experimented to see whether we could get the machine to look up a specific passage from Goethe's Faust, for instance (we were using the most advanced current version of GPT, too). The machine did a remarkably impressive job of summarizing Part II of the drama, despite its nonlinearity, and then of identifying and paraphrasing for us a different scene that resembled the one we had in mind and concerned the same characters. But it still wasn't producing the specific passage we wanted. We kept pushing it to cite that passage with more and more narrowly tailored prompts. Eventually, the chatbot simply hallucinated a plausible-sounding but wholly invented passage that appears nowhere in Goethe's play.
Was this a mere bug in the program, or is it a feature? If a language model is trained to guess statistically plausible speech patterns, after all, then it will never really "know" the difference between a true and a false description. And if you push it to describe something sufficiently specific, about which not enough has yet been written online for it to synthesize and paraphrase, then it will just make something up. To actually understand what it is saying, sufficiently to distinguish a true statement from a merely plausible-sounding but bogus one, would require the machine to be built on very different principles. In other words, it would require another qualitative advance toward true artificial general intelligence, which, while certainly possible, is not a given.
We are therefore in an odd moment of transition, in which the undeniable power of the new technology convinces us that it promises something utterly disruptive and transformative in the fabric of our lives, yet its limitations make it hard to predict exactly where and how it will alter our economy first.
But this has not stopped either business or organized labor from trying to get out in front of it. Investors are pouring cash into the handful of publicly traded companies with an early lock on the new technology; and it made headlines when Hollywood screenwriters sought restrictions on AI-generated content in a recent contract dispute. The union leaders acknowledged that the concern about writers being replaced with chatbots was speculative at this point, and the main sticking points in the negotiations (which broke down and led to a strike) had to do with more conventional disagreements over pay and the like. Still, the dispute marks the first time I know of that the risk of automation by generative AI has been an explicit topic in labor negotiations.
How justified are these concerns about losing jobs to the new AI? The future is of course unknowable, and all the more so in the face of a new technology of undeniable power. Since I cannot claim to predict it, the best we can do is look to the past: how have people confronted automation and the threat of disruptive new technologies before?
When looking for historical parallels to present-day concerns over automation, many writers and artists have been drawn to the fate of the traditional handloom weavers of the early nineteenth century. These weavers were among the first to be displaced by new technology in the early stages of the industrial revolution, and thus they are ground zero, as it were, for the tragedy of automation.
Nineteenth-century writers like Heine and Hauptmann portrayed the misery of the Silesian weavers, pushed to the brink of starvation by competition with the new steam-powered looms that could work much faster. But in Hauptmann's The Weavers, the threat of automation itself largely occurs off-stage: it is a hidden force disrupting the traditional economy of the weavers and driving down their wages, rather than an imminent and visible presence threatening their livelihoods and traditional way of life. For a more direct rendering of the drama of automation itself, set in the country where the industrial revolution began, I turned to Ernst Toller's 1922 play, The Machine-Wreckers, which looks back on the events surrounding the Luddite rebellion of roughly a century before.
Toller's play has an undeniable simplicity and emotional power that makes it beautiful, and it is rendered especially potent by the tragic life of its creator—Toller's fate as an exile from Nazi Germany, who gave his last pennies to Spanish Civil War refugees before his untimely death, has rightly placed him among the ranks of the left's secular saints of the previous century.
The weakness of the play, though, certainly lies in its (almost unapologetic) didacticism. The central character, Jimmy Cobbett, is a wandering activist who arrives in Nottingham to try to convince the displaced weavers to accept revolutionary discipline, bide their time until the conditions for revolution are ripe, and channel their disgruntlement about automation into a broader critique of the capitalist system. He is portrayed as wiser, braver, and more intelligent than everyone else, and the audience is seldom invited to question whether he is right. The tragedy is simply that the "machine-wreckers" do not listen to him.
For all the literary imperfections of Toller's play, however, it has a great deal to tell us about our present moment of renewed anxiety over the threat of automation. When the new power loom arrives in town, for instance, the weavers at first regard it with superstitious dread and quasi-religious awe. "Perhaps it is God. It may be God," says an old worker. One is reminded very much of our present-day existential anxiety in the face of generative AI: have we created a demon? A supernatural being? Will the machine learn to augment its own intelligence and eventually become a superhuman techno-deity?
Most of the weavers blame this fearful, almost supernatural machine for all their problems, rather than the employers who have placed it there or the competitive economic forces that compelled them to do so; and so they end up attacking the engine itself.
Toller's argument—expressed through the heroic Jimmy Cobbett—is that the Luddites of the early nineteenth century had misidentified their true foe. The machine was not actually the problem, he argues—to the contrary, it offered the promise of liberation from drudgery. No longer would they be forced to toil behind a hand loom. The machine could free them instead to—do what exactly? Cobbett is somewhat vague on this point, but it appears to involve frolicking in the woods. "Do you still know that there are forests? [...] Forests where men pray? Forests where men dance? What is your trade to you? [...] Your work was drudgery [...] What if instead of sixteen hours you worked but eight. With the machine no more your enemy but your helper! What if your children, freed from drudgery, grew up in sunny schoolrooms, gardens, playing fields?"
In this respect, Toller and his mouthpiece Cobbett sound a bit like our present-day Silicon Valley socialists and techno-utopians, many of whom aim explicitly at the elimination of jobs through automation, precisely because they believe this will free humanity from the burden of work. Of course, the problem, in the early nineteenth century as now, is: who will feed people after they have been deprived of their livelihoods? We will be free to do what? To starve?
The Silicon Valley socialists, many of whom work as AI researchers themselves, think that either the government or private actors will be forced to step in to provide a universal guaranteed income to the people displaced by the new technology. (Much as early twentieth-century socialists like Toller believed that if the machine and other instruments of production could be nationalized and placed in the hands of the workers, they would serve humanity.) Sam Altman, for instance, a founder and the chief executive of OpenAI, maintains that profits from the venture will be used to set up a trust fund that will eventually sustain human life, once artificial general intelligence has rendered the human brain obsolete as an instrument of labor.
Most ordinary people, however, would distrust this vision, whether it comes from Toller's Cobbett or from Altman. For one thing, it places us utterly at the mercy of the largesse of our future techno-overlords. In so doing, it relies on implicit beliefs about the generosity of human beings that may not be supported by history or experience. As Ted Chiang wrote in a recent piece in the New Yorker, the argument that technology displacing jobs is actually a good thing, because once the displacement has proceeded far enough the government will have no choice but to establish a universal dole to sustain our lives indefinitely, is a form of "accelerationism" that rests on an implausible set of assumptions about human nature. And, even more disturbingly, it seems to countenance an astonishing level of human misery in the short term as a necessary evil.
Finally, there is the problem that most of us want to do something productive with our lives that can't just be replaced by a machine. I have little appetite for dancing in the woods, and still less for schoolrooms; and perhaps the weavers listening to Cobbett's sermon felt the same. Reading this part of the play, I was reminded of a recent conversation with a friend who lives in the Bay Area, in which he described a hallucinogenic party in the woods he had attended, which we both took to be representative in some fundamental way of the California ethos. "A glimpse into our future," I said. "Once AI has displaced all the jobs, we'll be free to take drugs and romp through the forests!" "Yes," my friend agreed, "while we wait for our checks to arrive from the OpenAI trust fund!"
The point is, we were kidding: no one actually wants this future. I would get bored by the second day, as I suspect most people would.
The techno-utopian vision is not the only possible argument in favor of automation, however. Another one also appears in Toller's play, though this one is put into the mouth of Cobbett's villainous and treacherous brother Henry, who has sold out to the masters in order to secure the coveted position of overseer. Henry's argument, the same one most economists would recognize and use today, denies the premise that the Luddite machine-wreckers and the techno-utopians share: namely, that the rise of the machine will replace human workers. To the contrary, Henry argues that it will ultimately create more jobs, by increasing efficiency and spurring demand. Once the power looms have lowered the prices of commodities, he argues, there will be more demand for their products than ever before. Even bigger factories will have to be constructed, with an even greater need for workers to staff them.
This argument, even though it is voiced by the play's antagonist, has more of history and experience on its side than the techno-utopian vision. After all, it accurately describes what has eventually happened in the wake of every prior wave of automation since the start of the industrial era. Each new leap of technological progress has displaced some workers in the short term, to be sure; but eventually, the historical record shows, it has generated more jobs in aggregate, even in the same industries, because it made those industries more efficient.
Ted Chiang, in the piece cited above, disputes this—arguing that the information technology revolution of the last few decades has done relatively little to improve average standards of living. He argues that the truly decisive factor is whether or not government policies redistribute the gains of technological advances.
And while there is no doubt much truth to this, the lack of progress for ordinary people emerging from the digital technology revolution could also be explained from the opposite direction. Robert Gordon, for instance, has argued that the stagnation in living standards emerged not from automation driving people out of work, but from precisely the opposite: digital technologies were not actually increasing productivity at the rate that previous technological advances had done, and this, rather than failures of redistribution, accounts for why the earlier industrial breakthroughs were accompanied by increases in average living standards while this one has not been. In other words, the problem with the digital revolution may be precisely that it caused too little automation, not too much.
Plausible as this economist's account of the long-term impact of automation may be, however, it shares at least one defect with the "accelerationist" techno-utopian vision: it seems to accept as a necessary evil the near-term sacrifice of individuals for the sake of later progress in aggregate. Even if the steam-powered looms did indeed create more jobs than they displaced over the century that followed, this did nothing to help the individual laid-off weaver, who had spent a lifetime perfecting a single skill that was now otiose; who would struggle to make the transition to a new form of work; and who might, and probably did, perish while waiting for the promised economic millennium of greater abundance.
I am left feeling, therefore, that the suspicious weavers, whom both the prophetic socialist Cobbett and the employers denigrate as backward and mistaken, were actually closer to the truth. They recognized that automation was a threat to their livelihoods and to the sources of meaning in their lives. To be sure, the employers in Toller's play hold out hope that at least some of the weavers might not be driven out of work, even in the near term. They assure the workers that they will still need some human employees to mind the machine, maintain it, and keep an eye on its outputs. But the workers find this a poor substitute for the skilled craft they have spent their lifetimes perfecting.
One is reminded mightily of the sorts of promises employers are making today to knowledge workers who worry that some of their more rote tasks might soon be taken over by chatbots. The Wall Street Journal cites one source who recommends that employers use ChatGPT in their interviews with prospective engineers. Instead of asking applicants to write code themselves, the source argues, they should be asked to plug the task into the AI and then workshop the resulting output: "Ask applicants to use ChatGPT to solve a problem, and then have them critique the answer it spits out. Does the code have any security vulnerabilities? Is it scalable? What's good or bad?"
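To picture the exercise, here is a hedged sketch of the sort of flawed, chatbot-generated snippet a candidate might be handed to critique. The function and its bugs are my own invented example (the Journal's source names no specific code), chosen to illustrate the security and scalability questions quoted above:

```python
import sqlite3

def find_user(db_path: str, username: str):
    """Look up a user record by name: plausible-looking, but flawed."""
    conn = sqlite3.connect(db_path)
    # Security vulnerability: the username is spliced directly into the SQL,
    # so an input like "x' OR '1'='1" returns every row (SQL injection).
    # A reviewer should propose a parameterized query instead:
    #   conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    query = f"SELECT * FROM users WHERE name = '{username}'"
    rows = conn.execute(query).fetchall()
    # Scalability problem: a fresh connection on every call, never closed
    # or pooled, which will not hold up under concurrent load.
    return rows
```

The human's job, in this vision, is not to write the code but to catch what the machine gets wrong.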
This may indeed be a plausible way in which human engineers will continue to be employed in the future, alongside the new generative AI algorithms. But it may also be far removed from what engineers see as the source of pride and meaning in their work, or from the reasons they became engineers in the first place. One is tempted to answer the WSJ source's advice with the same words as Toller's weavers, when they are informed that they could secure new jobs as minders of the machine. As one of them, Albert, retorts: "And if they leave us at the engine, what sort of drudgery will it be? We shall tie up broken threads and tend the hungry beast like prentice farmhands?"
Even if automation really does pose a threat to people's livelihoods and sources of meaning, however, that does not mean it can be easily stopped or reversed. The Luddites ultimately failed in their rebellion: destroying one machine did not prevent another from being constructed elsewhere; and so long as the machine existed in one factory, it would prove a threat to workers everywhere. Even those employers who refused to automate would have to contend with competition from those who did, and the weavers' wages and employment would suffer indirectly as a result (as they do in Hauptmann's play).
A technological innovation is notoriously hard to undo, in other words. As a character in Friedrich Dürrenmatt's play The Physicists, which metaphorically deals with the atomic bomb, puts it: "what was once thought can never be unthought." (Kirkup trans.) One is reminded too of Warren Buffett's words on the subject at a recent shareholder meeting of Berkshire Hathaway. Speaking of generative AI, he said: "It can do all kinds of things, and when something can do all kinds of things, I get a little bit worried because I know we won’t be able to uninvent it." And who could disagree with this anxiety?
One unpleasant possibility that strikes me as all too plausible is that the AI neither displaces all the jobs and promptly ushers in a dystopia or utopia of universal basic income, nor quickly and seamlessly leads to productivity gains that result in still more jobs and a higher standard of living, but instead combines with a looming economic downturn to cause incremental displacement and wage cuts: a slow slide into stagnation and misery. Ted Chiang, for instance, in the New Yorker piece cited above, suggests that AI might become the next version of McKinsey. The rise of management consulting, he argues, served employers as a handy way to cut costs, lower pay, and eliminate jobs, while displacing responsibility onto outside advisors. AI could do something similar, Chiang argues: namely, provide employers with a convenient excuse to do things they would want to do anyway, such as drive down wages.
I could see the next few years proceeding like this: suppose the Federal Reserve's interest rate hikes and the risk of default from the debt ceiling drama in Congress combine to push our economy into the long-feared recession. Many employers would be forced to lay off workers and eliminate positions during the downturn; and some might be tempted to replace as many of these positions as possible with AI automation, rather than rehiring, as a way to keep down costs.
Our society would then almost certainly panic over the new technology and blame the machine for what was really a straightforward, and fixable, failure of macroeconomics: policymakers first refused to increase supply by allowing more immigration, which produced an excess of demand and sparked inflation, thereby prompting the Fed to intervene to lower demand through interest rate hikes. The resulting recession would then coincide with Republicans in Congress seeking to cut government spending, and they would have every incentive to block any fiscal stimulus in order to deny the Biden administration a swift economic recovery shortly before the next election, all of which would needlessly prolong the recession, just as happened after the 2008 financial crisis.
Meanwhile, AI would be replacing jobs, but that would be merely a side issue: employers would hire human workers again if the economy improved. After all, we should remember that after 2008 there were similar fears that companies had used the downturn as an excuse to automate professions permanently, and that even if the economy later improved, those jobs would never return. Yet just a decade later, our economy had the opposite problem: it was producing too many jobs for human workers and could not find enough people to fill them, contributing to inflation. The risk, then, is not so much automation as it is self-interested and short-sighted policymakers needlessly prolonging economic misery in order to score political points and defeat a Democratic candidate in 2024.
And there is a sense, then, in which Toller and Jimmy Cobbett are actually right: the machine is not the workers' enemy, and the machine-wreckers had misidentified their true foe. The true problem is the failure of political leaders to take even the simplest steps to reduce the misery of their constituents, when it is not in their own short-term electoral interest to do so.