In Friedrich Dürrenmatt's 1961 play, The Physicists, a gifted scientific researcher stumbles upon a secret of the universe so profound and transformative that it threatens to destroy the world. "New and inconceivable forces would be unleashed, making possible a technological advance that would transcend the wildest flights of fantasy if my findings were to fall into the hands of mankind," he observes. (Kirkup translation throughout.) Elsewhere he reflects that, if he had published his researches, "the consequences would have been [...] the breakdown of the economic structure of our society."
In order to prevent this fate, the scientist (one Johann Wilhelm Möbius) takes an extreme measure to ensure his results will never be taken seriously. He tells everyone that his findings were dictated to him by the disembodied spirit of King Solomon, returned from the grave, who also fills his head with morbid sci-fi poetry that constitutes a new set of "Psalms" for the space age. In short, he enacts the role of mental patient, feigning madness so thoroughly that he is confined to a sanatorium (thereby joining the ranks of the play's other "patients"—each with secrets of his own—who pretend to be Einstein and Sir Isaac Newton).
These steps allow him to live in obscurity, where, it seems, he can complete his researches into the fundamental secrets of the universe without having to fear that he will thereby accidentally bestow upon humankind the ability to destroy the world. This, he insists, is the only responsible course. "[T]oday it's the duty of genius to remain unrecognized," he declares. And the result is that he is able to have it both ways: he can perform his research, but also escape the ultimate consequences of his findings. We can be "physicists," he urges the other characters on stage, "but innocent."
Dürrenmatt wrote his modern parable with the dangers of atomic weapons and nuclear war foremost in mind. But these days, returning to the play and reading it from the perspective of 2023, one cannot help but think about the presently unfolding AI revolution. Not, of course, that nuclear weapons are any less of a threat now than they were in 1961 (or '62, when the play was first staged). Arguably, the risk of nuclear annihilation is greater now than it has been at just about any time since then (such, at least, is Daniel Ellsberg's contention in a haunting recent interview).
But the human imagination demands ever-new dangers and anxieties to populate its visions of apocalypse (yeah, we say to the more familiar bugbears, but what have you done to threaten my existence lately?). And AI has come to fill that place in the contemporary mind. Even as Putin warns of using tactical nuclear weapons in Ukraine, atomic bombs are still not the first things that come to mind today when we hear the playwright's phrases about an unspeakable technological transformation. Instead, we think of the robot minds that have developed the ability to replicate human speech and writing.
Today's AI researchers, of course, have not made Möbius's choice to keep their findings hidden and to adopt the pose of mental patient in order to prevent anyone from using their research for evil ends. Nonetheless, they share all of the physicist's fears of their own emerging catastrophic powers (AI researchers are typically among the loudest voices warning against the apocalyptic menace posed by the technology; or, as David Wallace-Wells observed recently: "A.I. is being built by people who think it might destroy us.") They also share his fantasy of philanthropic innocence.
As a recent profile of OpenAI co-founder Sam Altman in the New York Times explores, many of the architects of the current AI revolution have characterized themselves as "rationalists" and "effective altruists" (interpreting these slogans according to the specific meanings they have acquired in Silicon Valley circles). They are developing a technology that they themselves think has a decent chance of wiping out humanity, yet they believe—perversely—that this confers upon them an even more binding commission to be the first to create these machines, so that it can be done "responsibly."
The Times article portrays Altman—beneath a relatively polished exterior—as a true believer in the same mode. He too contemplates, somewhat glibly, the possible destruction of humanity, or the total disruption of economic life, at the hands of the technology he is building; but he also believes he has the power to save us. For one thing, the OpenAI contract supposedly has an escape clause (giving that term a new meaning for all humanity): if things truly go wrong, the company has the right to shut down the research and withhold its findings, Möbius-style.
Altman also maintains that, even if the new technology proves impossible to contain, it could as easily prove the economic salvation of humankind as its ruin. Profits beyond a certain level from the disruptive new technology, he says, will be placed into a nonprofit trust earmarked for the good of humanity, to be paid out in lump sums to humankind at large if AI drives us all out of work. How much comfort, though, does anyone take in this prospect? Is anyone eager to sign up to survive on the largesse of the Silicon Valley overlords? Talk about a road to serfdom!
The picture that emerges from listening to this cohort is that our lives are in the hands of a group of strikingly young, emotionally immature people who—through a quirk of fate—have been invested with titanic, world-altering powers on a scale possessed by few others in history. (So great is this power, indeed, that even the creators of the new AI technology are drawn to the nuclear analogy for an apt comparison: Altman in the Times profile describes OpenAI as a new Manhattan Project and likens himself to the father of the atom bomb, Robert Oppenheimer.)
In Dürrenmatt's play, let it be noted as a warning, Möbius's project of escape from the consequences of his discoveries ultimately collapses. He is not able to remain, at last, a "physicist, but innocent." At the risk of spoiling the play's conclusion: it transpires that his key findings have all along been copied down by the genuinely unhinged director of his psychiatric institution, who immediately undertakes to establish an industrial cartel to profit from them. The potential material rewards of the new technology prove too great to resist, even if they may mean the destruction of humanity.
One can't help but feel that the new AI researchers' fantasies of innocence and philanthropic disinterest are equally illusory. Even if OpenAI's contract contains an escape clause, there is nothing to prevent other actors from using its findings for nefarious ends. And one doesn't get the impression that the creators of this new technology are so immune to the profit motive that they will selflessly sustain humankind indefinitely once they have driven us out of work. They seem to be doing quite well for themselves. And recall that mega-scammer Sam Bankman-Fried was also among the self-declared "effective altruists."
This character study, of course, leaves to one side the question of whether our apocalyptic fears and anxieties about the new technology are actually justified. I tend to think they are overstated. Most of the scenarios involving a world-ending superintelligence depend, by their proponents' own admission, not only on extrapolations from current AI language models, but also on the assumption that several further transformations of this technology on a similar scale will take place over the next few generations, which seems to me to lie squarely in the realm of the possible but unknowable.
Given the state of our knowledge of the future—seldom reliable at any point in human history—we can say that future AI technologies might develop the capacity to destroy us all; but then again, they might not. Whether they will develop power on such a scale is certainly contestable (no technological progress in any domain proceeds along a consistent exponential curve forever). From where we stand, the AI-destroys-us-all scenario seems about as likely as the AI-saves-us-all-from-some-other-existential-risk scenario. The prospect is therefore so uncertain that it can provide little guidance for action.
AI is therefore one of those unknowns of life we may just have to live with—up there with the supervolcano under Yellowstone, microplastics in our bloodstream, the danger of an asteroid collision, and nuclear war. Maybe it will be a big problem; maybe not. But so long as the future development of AI technology could prove to be a world-saving thing just as easily as a world-ending thing, and the dice therefore seem about evenly weighted on both sides, it is very hard to know whether or not to make the throw. What if not developing this technology proved to be the thing that destroys humanity?
Let us not forget, though, that the possibility of global annihilation is not the only thing that haunts Möbius, in Dürrenmatt's play, about the dangers of the new technology he is developing. He also fears that it will cause "the breakdown of the economic structure of our society." This likewise seems a risk of the new AI technology, and one that may be a bit more immediate than the specter of superintelligence. Cade Metz asks, for instance, in the Altman profile: "if a machine [...] could do anything the human brain could do[,] would [it] eventually drive the price of human labor to zero[?]"
Even if this fear is overstated—even if, like other waves of automation past, AI somehow creates more jobs than it eliminates in the long run—the pain and disruption of the transformation will be immense. And the people with the power to force this change on a laggardly humanity are frighteningly blasé about its effects. I think again of a favorite passage from Robert Frost, with which the theme of Dürrenmatt's play would perhaps accord: "None should be as ingenious as he could, / Not if I had my say. Bounds should be set / To ingenuity for being so cruel / In bringing change unheralded[.]"