Saturday, March 18, 2023

AI Melancholia

The past few days I've felt another wave of AI-related melancholia wash over me. Here I was thinking I had fully processed and already put aside my feelings on the subject. In one post after another over the last few months, after all, I have catalogued my thoughts and reactions to the dawning of the AI age, doing my best to maintain perspective on the potential dangers of the new technology without taking at face value every extreme claim made on behalf of its purportedly revolutionary powers.

No sooner had I put these thoughts behind me, however, than the makers of the current frontrunner in the AI race announced another, vastly more sophisticated version of the same tech; and I am back to reassessing all of my assumptions and evaluations. The same fears return: is this the end of an era in human life? Was 2022 in some sense the last normal year we'll ever enjoy—and if so, should I have appreciated it more? Will I still be able to derive meaning from my life when many of the activities I value most might prove replaceable by machines?

Dark and lonely thoughts; but a friend made sure I saw a recent column by Ezra Klein that at least proves I am not alone in these fears. Klein wrestles, he admits, with his own skepticism of the most grandiose claims made for the new technology. He acknowledges that, in the presence of AI researchers and their VC funders, he feels as the rest of us often do in the same company: that these people are inhabiting another reality entirely. To many of us, that makes them seem like deluded sufferers from a Messiah complex. They often appear that way to me. But Klein—without dismissing this judgment—asks us to suspend it for a moment in order to take seriously a more disturbing possibility: what if they are right? What if AI really does mean the end of the world as we have known it? 

In a book that influenced me greatly in my twenties, the psychiatrist Robert Jay Lifton discusses the similar questions that beset him about nuclear weapons in the closing years of the twentieth century. Examining the recent history of doomsday cults like the Japanese extremist sect Aum Shinrikyo, Lifton notes that in the past there was a clear line of rationality separating the worldviews of apocalyptic/millenarian cults from those of non-believers. The impulse toward apocalypticism, Lifton argues, is founded in a fear of mortality, and a resulting conflation of the impending death of the self (and the universe of consciousness each self contains) with the annihilation and extinction of all humanity.

What's troubling about the nuclear age, Lifton argues, is that it has become increasingly difficult to maintain this distinction. Starting in 1945, after all, humanity began to possess the technological means to annihilate itself. Thus, the worship of the apocalypse was no longer purely an exercise in myth. People with sufficient destructive will could actually bring about the death of humankind. It had become possible—for the first time in history—for someone who had conflated their personal mortality with the impending destruction of all the world to realize these dreadful possibilities through their own action. All they would need is enough atomic weapons; and the world's nuclear arsenals already possessed (and still possess today) sufficient explosive power to accomplish this ultimate extermination many times over.

In the face of yet another potentially destructive and phenomenally powerful technology—AI—we once again face an erosion of boundaries between the "reasonable" and the "paranoid" worldview. The belief of many AI researchers that they are unleashing a superhuman technology with the potential to transform—and possibly destroy—the world as we know it may seem to us the grandiose delusions of a micro-cult of self-important technologists, drunk with their own sense of destiny. Surely, they are being the "paranoid" ones, and we who doubt them are the reasonable and dispassionate observers. Yet, to Lifton's point, a sufficiently destructive technology can erase these comfortable distinctions. The modern age has produced human creations of almost unlimited destructive potential, and the possibility of human beings one day eliminating themselves through their own creations cannot be dismissed as a paranoid fantasy.

Those of us who have endured some form of severe anxiety or depression in our lives—and it is probably the majority of humankind that has done so—have perhaps learned to survive these afflictions by distrusting the most paranoid construction our minds can place on future events. We learn to ignore the voice of fear, the voice of the worst-case scenario, because it is so often proven wrong. We may have been afraid of heights, say; or we may have been beset at one point in our lives by the image of a vast structure we inhabit suddenly giving way beneath us or crashing down on our heads. We learn through experience that these terrifying visions seldom come to pass; but more important than the statistical improbability of the matter (which we never doubted, even at our most anxious) is the realization that these are visions of an unknowable future, and therefore can never be proved or disproved. Meanwhile, we are still here in the present; no building has crashed down on us. So we learn to subsist here in the moment, and not to listen to the paranoia.

Yet here is Ezra Klein suggesting we should start taking that paranoid voice more seriously again. No doubt the heuristics we have learned for dismissing most of our worst-case-scenario hypothesizing work for 99% of the dangers we face in daily living; but what if, Klein asks, AI really does belong to that 1% of outliers—the technological dangers that are every bit as menacing as they are cracked up to be? We have Lifton's example of nuclear weapons before us to remind ourselves that the distinction between irrational personal fears and rational assessments of real-world dangers cannot always be neatly drawn. The psychotherapist can tell the paranoiac that their individual death will not mean the death of the human species; but they cannot honestly tell them that the death of the human species is not a possibility within their lifetime. The existence of world-ending technologies therefore poses a source of fear that cannot be argued away with cognitive behavioral methods.

If this is the case—and we are, with Klein, merely opening ourselves to the possibility, not endorsing it as inevitable truth—then the next question naturally arises: why are people building these machines? Klein asks this of the AI researchers, many of whom—he records—rate the possibility of the new technology wiping out humankind as somewhere in the range of 10% or higher. Why, he asks them, would they devote their lives to creating something they believe has a one in ten chance of destroying humanity? 

The answer he gets is the plea of inevitability. In another form, it really amounts to the same rationale people always provide for the maintenance of a nuclear arsenal: if we don't have one, someone else will. The creation and proliferation of this technology is inevitable. Would we not rather exercise some control over the direction it takes; would we not trust ourselves, rather than someone else, to be the ones to develop this technology; and do we not therefore have an obligation to build it first, since no force can possibly arrest its onward development?

As effective as this argument may be as a salve to the conscience, however, it is not obvious that it is true. I'm not sure it's really the case that the development of technology in a particular direction can never be halted or slowed. The innovation of new technology is one particular manifestation of the human will to power, and—as Robert Frost once wrote in a poem that deserves to be more widely known in our modern age of automation—there is no reason why it couldn't be curbed within certain limits, as other manifestations of the will to power have been by the forces of civilization: "Political ambition has been taught, / By being punished back, it is not free:" writes Frost. "It must at some point gracefully refrain. / Greed has been taught a little abnegation / [... So too,] None should be as ingenious as he could, / Not if I had my say. Bounds should be set / To ingenuity for being so cruel / In bringing change unheralded[.]"

One hears more than a little of Frost's quixotic plea on behalf of stasis in the second of two proposals Klein forwards in the closing appeal of his column: "One of two things must happen," Klein writes. "Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies." Something, that is, like a non-proliferation treaty for AI. Which—if we take seriously at all the comparison we have made above to the destructive potential of nuclear weapons—is in no way an absurd suggestion. 

Even as I seek to honor Klein's plea that we take seriously the more extreme hypotheticals about AI, and not just dismiss them out of hand as paranoid and megalomaniacal delusions, I still hear the voice of my "reasonable," heuristic-deploying self speaking from the wings: the self that learned well the lessons of its cognitive behavioral therapy, and which still says: but come now, humanity has made it this far. Transformative change has come time and again, at least once per generation. This will surely be another such change. I have no doubt it will be momentous. But momentous change does not have to be for the worse; nor, even if it poses new challenges, need it be an unmitigated disaster. Such change has never before annihilated all that made life worth living. Yes, Klein replies, I agree with you about the past—but what if this time it really is different?

The only knowledge I can fall back upon is the one key insight of the post-anxious brain I described above: the future is unknowable. The AI may destroy us all; or it may simply evolve at such a rate that it outstrips humankind's capacity to adapt. Or it may not; it may instead bring mostly positive and incremental change. Not every technology evolves in a single direction indefinitely—indeed, few if any lines of historical development proceed along only a single path. It really could go either way. Just as the building could collapse on us while we're trapped inside, or the plane could fall from the sky, or any number of the other paranoid fears that our anxiety invents could come to pass. We simply don't know. That does not change the fact that meanwhile, we are still here, in the present, with our feet on the ground.

I have recently finished John Fowles's classic 1969 novel The French Lieutenant's Woman, and I find a relevant lesson for all this in the book's closing lines. Fowles's tale is a self-conscious imitation of the Victorian style, featuring Victorian characters, but it receives a postmodern twist when the omniscient narrator is introduced as a personage who dogs the footsteps of his protagonist and intervenes in the plot's conclusion, including by turning back time on at least one occasion in a rather Hermione Granger-esque fashion. The author/narrator thereby provides three different possible endings to his story, and closes by begging us—regardless of which one we prefer—to at least accept that each of the possible endings is as likely as the others; that life is not a grand working out of an intricate and unavoidable destiny, therefore, but a creature of chance; and that we are not faced in our existence with only happy or sad outcomes, but rather a series of ever-evolving contingent events of mixed moral worth.

Fowles is saying, in short, what I have tried to say above: that the future cannot be known. We do not know what it will bring. We don't know how AI technology will evolve, and whether it will ultimately be for good or ill. Life is not a matter of such binary, all-or-nothing outcomes anyway, Fowles writes, in which we will all at once either be saved or destroyed. "[L]ife [...] is not one riddle and one failure to guess it," he writes, "is not to inhabit one face alone or to be given up after one losing throw of the dice; but is to be, however inadequately, emptily, hopelessly into the city's iron heart, endured." So it is with us, as we embark upon an unknowable future, faced with technologies of unspeakable potential and therefore an inherently unknowable outcome. Our task is not to know what is coming, for that is impossible. It is to endure that future as it transforms itself, moment after moment, into the present.
