The past few days I've felt another wave of AI-related melancholia wash over me. Here I was thinking I had fully processed and already put aside my feelings on the subject. In one post after another the last few months, after all, I have catalogued my thoughts and reactions to the dawning of the AI age, doing my best to maintain perspective on the potential dangers of the new technology without taking at face value every extreme claim made on behalf of its purportedly revolutionary powers.
No sooner had I put these thoughts behind me, however, than the makers of the current frontrunner in the AI race announced another, vastly more sophisticated version of the same tech; and I am back to reassessing my assumptions and evaluations all over again. The same fears return: Is this the end of an era in human life? Was 2022 in some sense the last normal year we'll ever enjoy—and if so, should I have appreciated it more? Will I still be able to derive meaning from my life when many of the activities I value most might prove replaceable by machines?
Dark and lonely thoughts; but a friend made sure I saw a recent column by Ezra Klein that at least proves I am not alone in these fears. Klein wrestles, he admits, with his own skepticism of the most grandiose claims made for the new technology. He acknowledges that, in the presence of AI researchers and their VC funders, he feels as the rest of us often do in the same company: that these people are inhabiting another reality entirely. To many of us, that makes them seem like deluded sufferers from a Messiah complex. They often appear that way to me. But Klein—without dismissing this judgment—asks us to suspend it for a moment in order to take seriously a more disturbing possibility: what if they are right? What if AI really does mean the end of the world as we have known it?
In a book that influenced me greatly in my twenties, the psychiatrist Robert Jay Lifton discusses the similar questions that beset him about nuclear weapons in the closing years of the twentieth century. Examining the recent history of doomsday cults like the Japanese extremist sect Aum Shinrikyo, Lifton notes that in the past, a clear line of rationality separated the worldviews of apocalyptic/millenarian cults from those of non-believers. The impulse toward apocalypticism, Lifton argues, is rooted in a fear of mortality, and in a resulting conflation of the impending death of the self (and the universe of consciousness each self contains) with the annihilation and extinction of all humanity.
What's troubling about the nuclear age, Lifton argues, is that it has become increasingly difficult to maintain this distinction. Starting in 1945, after all, humanity began to possess the technological means to annihilate itself. The worship of the apocalypse was thus no longer purely an exercise in myth: people with sufficient destructive will could actually bring about the death of humankind. It had become possible—for the first time in history—for someone who had conflated their personal mortality with the impending destruction of all the world to realize these dreadful possibilities through their own action. All they would need is enough atomic weapons, and the world's nuclear arsenals already possessed (and still possess today) sufficient explosive power to accomplish this ultimate extermination many times over.
In the face of yet another potentially destructive and phenomenally powerful technology—AI—we once again face an erosion of the boundary between the "reasonable" and the "paranoid" worldview. The belief of many AI researchers that they are unleashing a superhuman technology with the potential to transform—and possibly destroy—the world as we know it may seem to us the grandiose delusion of a micro-cult of self-important technologists, drunk with their own sense of destiny. Surely they are the "paranoid" ones, and we who doubt them are the reasonable and dispassionate observers. Yet, to Lifton's point, a sufficiently destructive technology can erase these comfortable distinctions. The modern age has produced human creations of almost unlimited destructive potential, and the possibility of human beings one day eliminating themselves through their own creations cannot be dismissed as a paranoid fantasy.
Those of us who have endured some form of severe anxiety or depression in our lives—and it is probably the majority of humankind that has done so—have perhaps learned to survive these afflictions by distrusting the most paranoid construction our minds can place on future events. We learn to ignore the voice of fear, the voice of the worst-case scenario, because it is so often proven wrong. We may have been afraid of heights, say; or we may have been beset at one point in our lives by the image of a vast structure we inhabit suddenly giving way beneath us or crashing down on our heads. We learn through experience that these terrifying visions seldom come to pass; but more important than the statistical improbability of the matter (which we never doubted, even at our most anxious) is the realization that these are visions of an unknowable future, and so can never be proved or disproved. Meanwhile, we are still here in the present; no building has crashed down on us. So we learn to subsist here in the moment, and not to listen to the paranoia.
Yet here is Ezra Klein suggesting we should start taking that paranoid voice more seriously again. No doubt the heuristics we have learned for dismissing most of our worst-case-scenario hypothesizing work for 99% of the dangers we face in daily living—but what if, Klein asks, AI really does belong to that 1% of outliers: the technological dangers that are every bit as menacing as they are cracked up to be? We have Lifton's example of nuclear weapons before us to remind ourselves that the distinction between irrational personal fears and rational assessments of real-world dangers cannot always be neatly drawn. The psychotherapist can tell the paranoiac that their individual death will not mean the death of the human species; but they cannot honestly tell them that the death of the human species is not a possibility within their lifetime. The existence of world-ending technologies therefore poses a source of fear that cannot be argued away with cognitive behavioral methods.
If this is the case—and we are, with Klein, merely opening ourselves to the possibility, not endorsing it as inevitable truth—then the next question naturally arises: why are people building these machines? Klein asks this of the AI researchers, many of whom—he records—rate the possibility of the new technology wiping out humankind as somewhere in the range of 10% or higher. Why, he asks them, would they devote their lives to creating something they believe has a one in ten chance of destroying humanity?
The answer he gets is the plea of inevitability. In another form, it really amounts to the same rationale people always provide for the maintenance of a nuclear arsenal: if we don't have one, someone else will. The creation and proliferation of this technology is inevitable. Would we not rather exercise some control over the direction it takes; would we not trust ourselves, more than anyone else, to be the ones to develop it; and do we not therefore have an obligation to build it first, since no force can possibly arrest its onward development?
As effective as this argument may be as a salve to the conscience, however, it is not obviously true. I'm not sure it's really the case that the development of technology in a particular direction can never be halted or slowed. Technological innovation is one particular manifestation of the human will to power, and—as Robert Frost once wrote in a poem that deserves to be more widely known in our modern age of automation—there is no reason why it couldn't be curbed within certain limits, as other manifestations of the will to power have been by the forces of civilization: "Political ambition has been taught, / By being punished back, it is not free:" writes Frost. "It must at some point gracefully refrain. / Greed has been taught a little abnegation / [... So too,] None should be as ingenious as he could, / Not if I had my say. Bounds should be set / To ingenuity for being so cruel / In bringing change unheralded[.]"
One hears more than a little of Frost's quixotic plea on behalf of stasis in the second of two proposals Klein forwards in the closing appeal of his column: "One of two things must happen," Klein writes. "Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies." Something, that is, like a non-proliferation treaty for AI. Which—if we take seriously at all the comparison we have made above to the destructive potential of nuclear weapons—is in no way an absurd suggestion.