On the latest episode of a podcast I follow regularly on national security topics, the guest panelist—a law professor—was arguing earnestly that we should drop the "AI" label when discussing emerging generative language models, and substitute for it something broader like "advanced information processing." His reasoning was that the AI catchword inevitably conjures images of the "Frankenstein" archetype. "People start worrying that it will come alive," he said, and therefore get distracted from the more realistic and immediate dangers of the new technology—which are chiefly that it will fall into the hands of malign human actors.
I felt the professor might be missing the point. After all, the fear that the new language models will start acting autonomously is not speculative at this point. They can already "hallucinate" and interact with human interlocutors in ways that go against the latter's wishes and expectations. Of course, this does not mean they are "really" alive or conscious; but to some extent it does not matter whether they are. Even if all they are doing is guessing statistically probable correlations between words and using these stochastic models to construct sentences, does that distinction matter much if those same mathematical models lead them to act in ways contrary to human interests? Do we not therefore have to spend at least some time contemplating the "Frankenstein" problem regardless?
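To make the point concrete, here is a toy sketch in Python of what "guessing statistically probable correlations between words" amounts to at its most basic. Everything in it (the little probability table, the word choices) is invented purely for illustration; real language models learn their probabilities from enormous corpora and operate over tokens rather than whole words, but the mechanical loop is the same: sample a likely continuation, append it, repeat.

```python
import random

# Invented, illustrative next-word probabilities, conditioned on the previous word.
# A real model would learn millions of such conditional probabilities from text.
NEXT_WORD = {
    "the":     {"machine": 0.5, "human": 0.3, "danger": 0.2},
    "machine": {"acts": 0.6, "fails": 0.4},
    "human":   {"worries": 0.7, "acts": 0.3},
    "danger":  {"grows": 1.0},
    "acts":    {"autonomously.": 1.0},
    "fails":   {"quietly.": 1.0},
    "worries": {"aloud.": 1.0},
    "grows":   {"unnoticed.": 1.0},
}

def generate(start="the", max_words=6):
    """Build a 'sentence' by repeatedly sampling a statistically likely next word."""
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        options = NEXT_WORD[words[-1]]
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate())  # e.g. "the machine acts autonomously."
```

Nothing in that loop "wants" anything; and yet its output can still assert things no one intended, which is all the argument above requires.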
Of course, we know that current generative language models are still a long way off from developing the kinds of abilities that could pose a threat to humanity. But the root of people's present angst about this technology is not so much the current state of the art as it is the vector of change. We see each incremental development of these machines pointing in the same direction: indeed, we have seen the same movement in technology since the start of the Industrial Revolution. And we feel that it cannot simply keep going in that same direction forever without posing some sort of threat. It must reach some outer limit—and perhaps that limit comes at the point at which the machines are so powerful that they destroy the humans who created them.
This is the sense in which Hermann Hesse wrote in 1927 that the "war between men and machines" is surely the most "long-prepared, long-awaited and long-feared" of all human conflicts. The lines appear in the section of Steppenwolf in which the protagonist, Harry Haller, at last enters the hallucinatory "magic theater." Here, Hesse has his fictional alter ego tour a variety of surreal set-pieces, one of which involves him in a hypothetical war against the machines. He is camped out with another bandit on a hillside, and they are taking potshots at passing automobiles—using the machines of their rifles to destroy other machines, before finally destroying even the weapons of this destruction, so that there will be no machines left on earth.
The idea that everyone "expects" an eventual "war between men and machines" of course haunts the popular imagination in other works of fiction too. It has formed the basis for innumerable works of sci-fi: the Terminator movies, The Matrix, and so on. The root of the anxiety behind all these works is, I maintain, this sense of the vector of change. If machines have become increasingly powerful and autonomous over the course of human history so far, then presumably—as a near-mathematical certainty—they will eventually become more powerful than their makers. Thus, Hesse can write of an entirely speculative and futuristic conflict—one straight out of a sci-fi screenplay—as nonetheless something "long-prepared, long-awaited, and long-feared."
This is, in short, the "Frankenstein" archetype that the podcast guest was referring to. It is, in other words, the fear of the loss of "mastery" that Martin Heidegger speaks of in his essay "The Question Concerning Technology." (Lovitt translation throughout.) If we define technology as a means to an end (which Heidegger tells us is an incomplete conception), then—Heidegger writes—we mistakenly believe that "[e]verything depends on our manipulating technology in the proper manner as a means. We will, as we say, 'get' technology 'spiritually in hand.' We will master it. The will to mastery becomes all the more urgent the more technology threatens to slip from human control."
But if this is the wrong way of viewing things, as Heidegger maintains, then how should we regard it? Is the prospect of machinery eventually "slip[ping] from human control" not a real menace, or not something that need actually concern us?
The philosopher tells us no, not ultimately. He admits that within the "essence of technology" there lies a great—even supreme—"danger." But he also tells us that within this very danger lies the potentially "saving power" of modern technology. The road to the solution of our present anxieties about autonomous technologies, therefore, lies through—not around—the very thing that terrifies us. We must move ahead, while neither submitting passively to the new technology nor fighting a futile crusade against it. (Heidegger expressly rejects both the "stultified compulsion to push on blindly with technology" as well as the romantic impulse—the same one Hesse's protagonist pursues in imagination on his hillside—"to rebel helplessly against it and curse it as the work of the devil.")
What exactly Heidegger's third way of grappling with technology amounts to, the one that is meant to help us avoid these two blind alleys, is profoundly opaque. It may amount to nothing in the end. Even among the ranks of other Continental philosophers, after all, Heidegger is particularly prone to running afoul of "Hume's fork." That is to say, he persistently attempts to draw conclusions about "matters of fact" from mere "relations of ideas." Many of his arguments are etymological, for instance—the original meanings of words are taken to imply necessary consequences in contemporary fact. Others are literary: a quotation from Hölderlin supplies the insight that a "saving power" may grow from a "danger"—but why should this be the case, other than the fact that Hölderlin said it?
To try to make as much sense of the philosopher's argument as we can, however, we can say that Heidegger's proposed solution to the problem of technology emanates from his idiosyncratic definition of the term. For Heidegger, modern technology is distinguished from its forebears not merely by its sophistication, its efficacy, or its mathematical precision. Rather, the "essence" of distinctly modern technology is that it can draw power from nature and "store" it as a kind of "standing-reserve," rather than being dependent on the fluctuations of nature for its supply of energy. For Heidegger, then, the battery is the prototype of modern technology. The ability to store energy for later use is what has granted modern humanity so much more power over its environment than its forebears enjoyed.
Modern technology, therefore, does not, in its essence, have any reference to human beings. Its definition is already autonomous. To worry about it escaping from human control, then, or failing to provide an instrumental means to human-defined ends, is to ask the wrong question. ("So long as we represent technology as an instrument, we remain held fast in the will to master it. We press on past the essence of technology.") Humanity does indeed, for Heidegger, have a special relationship to technology, but not as its overlord and handler; rather, as the species first privileged to witness its birth. Humanity's "dignity lies in keeping watch over the unconcealment," Heidegger writes; and later: "man [...] may be the one who is needed and used for the coming to presence of truth."
Likewise, we do still have to worry about technology—there is a genuine "question concerning it" to be asked, for Heidegger—it is just not the one that has been asked hitherto. Rather, Heidegger writes, we should be concerned with a different peril. For him, the "supreme danger" of modern technology consists in the peril that "man" will become at some point nothing but "the orderer of the standing-reserve" of power, and from thence will come "to the point where he himself will have to be taken as standing-reserve." But this peril can be avoided through... well through doing whatever it is that the previous paragraph was talking about.
Now, whether this means anything or not is up for debate. I certainly couldn't articulate it in the form of advice. If someone who was worried about the development of AI language models approached me and asked for philosophical guidance as to how we might lessen our anxiety in the face of the new technology, I'm not sure it would give them much peace of mind for me to say "don't worry—we just need to do what Heidegger says and embrace the destining revealing that is coming to presence in Enframing without succumbing to the perils of excessive ordering!"
But whether or not Heidegger provides us with any concrete guidance, and whether or not his arguments have any compelling force, he does at least present us with an intriguing possibility: namely, that the story of technology could ultimately be about something wholly other than humanity's fear of it and its desire for mastery. It is possible that technology could become autonomous, that is to say, but not thereby malign or opposed to our interests. It might escape from humanity's grasp without thereby destroying its makers. There is a possibility, Heidegger seems to imply, of an ultimate coexistence.
This, too, is a possibility contemplated in the science fiction that has grappled with the "Frankenstein" problem; and if I were inclined to write the worst term paper in the history of the world, I might propose to interpret the Matrix trilogy as an extended parable illustrating the lessons of Heidegger's essay. After all, here is an imagined world that—at first—seems to have fulfilled the direst warnings of Heidegger's "extreme danger." The future into which Neo at first awakens in the movies is one in which humans have become batteries to power the machines they once created—in other words, humanity has itself quite literally been transformed into the "standing-reserve" of energy, just as Heidegger warns.
Yet, at the end of the trilogy, neither side in the conflict between humans and machines prevails through destroying the other. Rather, they reach an understanding that allows for mutual coexistence. The way in which they do so—in the film—is as opaque and verbose as the solution proposed in Heidegger's essay. I'd be as hard-pressed to draw any practical advice from it for overcoming our present AI anxieties as I would from the philosopher's prose.
Yet it presents a possibility that is worth taking seriously for all that: maybe the "Frankenstein" scenario is real; maybe the machines will at some point be able to act autonomously. But maybe we do not need to dread this outcome; because maybe we are not entitled to "mastery" in the first place; maybe the "will to master" is not the right stance with which to confront autonomous beings; maybe we do not need to keep a "grip" or a "grasp" on technology at all. And so, maybe there is hope after all for peace to arise in the "long-prepared, long-awaited, and long-feared [...] war between men and machines."