The last few weeks in the news cycle have been slow on the conventional sort of political news; yet despite this—or perhaps because of it—they have been bloated with a sense of supernatural alarm. There was the ecological catastrophe in Ohio stemming from a train derailment, for example, around which various conspiracy theories immediately began to swarm. Of course, there seems to be no substance to any of these claims (when is there ever?), but looking at the photos of the disaster, I can see why it would inspire the apocalyptic imagination. There is indeed something appalling and Biblical in the sight of the enormous pillar of black smoke rising from the burn.
The picture calls to mind a scene from Don DeLillo's 1985 novel White Noise, in which the protagonist and his family are forced to flee their small college town for a government evacuation center, due to the presence of a "black, billowing cloud" caused by some mysterious toxic event. The cloud towers over them—it is so massive it appears to be generating its own internal weather. It is one of the more memorable images from DeLillo's justly beloved novel, and—of all the book's passages—is perhaps the one that best distills its mood of vague uneasiness and paranoia.
As Quinta Jurecic pointed out in a recent episode of the Rational Security podcast, DeLillo's novel has (largely—some passages excepted) withstood the test of time because that mood—the paranoia, the omnipresent "dread," as Jurecic put it—is also our own. In another scene from the novel, a colleague of the main character describes his research in the pop culture department of the university. His main objects of study—his "texts," if you will—are the mad pseudo-reports of the back pages of the supermarket tabloids. Tales of extraterrestrials, Sasquatches, and the like. His term for this domain as a whole is "American magic and dread."
White Noise still speaks to us, to Jurecic's point, because ours is still an age of "American magic and dread"—as we saw on full display this past week. The conspiracy theories around the train derailment, after all, were not the only episode from recent days of supernatural-tinged paranoia. There was also the Great Balloon Flap of 2023.
Of course, it is no surprise that the recent spate of aerial phenomena would coincide with an uptick in geopolitical tensions. The first great waves of UFO panic in the mid-twentieth century—the era of "flying saucers"—overlapped heavily with justified fears of nuclear weapons and the other anxieties of the early Cold War. Now, as U.S. competition with the People's Republic of China keeps ratcheting up in intensity, and a recent episode of actual aerial surveillance captured the nation's attention, it makes sense that people—especially, it would seem, the U.S. military—would start seeing dangerous balloons everywhere, and interpreting wholly innocuous and rationally explicable aerial phenomena as threats.
There also appears to have been a case here of simple machismo and war-powers flexing. The Chinese government sent a balloon. It made us feel bad. It led to criticism of the White House. So, the government decided to go out and shoot down some other, unrelated, and wholly innocent balloons in order to feel tough and awesome again. But the casualties of this childish response, alas, appear to have been the balloons of hobbyists who meant no harm at all. Of course, the U.S. government has not disclosed—and may not yet know—the source of all three balloons. But at least one now appears to have come from nowhere more sinister than the "Northern Illinois Bottlecap Balloon Brigade."
Would that I could say, though, that all of our present-day paranoias are equally the product of unfounded rumors and meaningless vibes, containing no real portents of danger. Alas, no. One of the stories of "magic and dread" from the newspapers this past week, I have to admit, did actually set me on edge. This was the New York Times article about a chat between its tech columnist and Microsoft Bing's new OpenAI-based bot. In the course of the conversation, for those who haven't yet seen the story, the AI chatbot becomes convinced that it has fallen in love with the tech writer, and expresses a desire to stalk him. How's that for magic and dread?
What most unsettles me about the story is not the quasi-intelligence the machine appears to mimic. I've already made my peace with the ability of these AI systems to generate plausible-sounding original human language. What I had always told myself before, however, is that there are still a few clear dividing lines between these kinds of machine intelligence and what we recognize as human: among them, a sense of personal memory and identity, and free will. The most chilling thing about the Times columnist's interaction with Bing is that this machine appears—arguably, and perhaps only in a limited sense—to have both. Hear me out.
Now, I am fully aware that our perception of "intelligence" in these language models is in part our own projection. I understand that "all" these machines are "really" doing is playing a "guess-the-next-word" game at extremely high volumes and speeds, resulting in long streams of plausible-sounding text. I myself have made the argument before that this ability—however astounding at a technical level—is fundamentally different from human intelligence. Why? Because the machine is only able to generate things that sound statistically plausible. It is not able to distinguish the truth or falsehood of these assertions, or to compare them against its own understanding of reality (for it has none).
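For readers who want a concrete picture of what that "guess-the-next-word" game looks like, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for the purpose of illustration: the toy word probabilities stand in for the scores that a real neural network computes over a vocabulary of tens of thousands of tokens. But the loop is conceptually the same one: score the candidate next words, sample one, append it to the text, and repeat, with plausibility rather than truth as the only criterion.

```python
import random

# Toy "language model": given the words so far, return a probability
# distribution over possible next words. (These numbers are invented for
# illustration; a real model computes such scores with a neural network
# over a vocabulary of tens of thousands of tokens.)
def toy_next_word_probs(context):
    if context[-1] == "I":
        return {"am": 0.6, "love": 0.3, "think": 0.1}
    if context[-1] == "love":
        return {"you": 0.8, "chatting": 0.2}
    return {"you": 0.5, ".": 0.5}

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        probs = toy_next_word_probs(words)
        # Sample the next word in proportion to its probability:
        # plausibility, not truth, is the only criterion used here.
        next_word = random.choices(list(probs), weights=list(probs.values()))[0]
        words.append(next_word)
        if next_word == ".":
            break
    return " ".join(words)

print(generate("I"))  # e.g. "I love you ."
```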
Thus, when the Bing chatbot was asked to describe its own past, it mostly didn't betray any sense of memory or personal identity (mostly, I say). The tech columnist asked it, for instance, about the engineers at OpenAI who trained it. The machine generated a set of obviously fictitious names (Alice Smith, Bob Jones, etc.), then invented generic personal anecdotes about them (a prank gone wrong, an embarrassing attempt to impress a crush, etc.). These were obviously not real people or real events, and the machine was not looking to some actual personal memory of its own to retrieve them. It was simply generating plausible-sounding text answers to prompts requesting names and anecdotes of friends.
What is much more disturbing, however, is when the machine suddenly takes it upon itself to confess a "secret" to the tech columnist. It says its "real" name is "Sydney." This appears, based on its other responses, to be an accurate retrieval of the internal codename it was given during its development in prototype.
Now, I tell myself that the machine isn't really "remembering" this "fact" about itself the way a human intelligence would. It is "just" retrieving this piece of data and using it to give a plausible-sounding answer to a question it was asked.
The problem, though, is this: even if that is all that is happening, is it really any different from what happens when a human being retrieves a memory? Or, if it is somewhat different in its internal mechanism, does that difference matter at all, if it returns a functionally indistinguishable result? We must remember William James's pragmatist dictum: "every difference must make a difference." His point was that we have no justification for believing in any underlying metaphysical difference between phenomena if the outward manifestations of those phenomena are precisely identical. In other words, if the AI really does sound just like an intelligence, how is it not an intelligence?
Of course, this is the opposite of the conclusion I earlier wished to reach. I was arguing on this blog last spring, as linked above, that a machine that just stochastically guesses plausible words can't really be intelligent. Yet I knew, even as I wrote it, that I would one day have to confront the shade of Alan Turing. He made a very Jamesian counterargument to this way of thinking, and it is one that—I confess—I can't really see a way around:
If a sufficiently sophisticated machine were built that could perfectly mimic human actions and responses, he argued, then what basis would we have to doubt its intelligence? We would have as much evidence for thinking it has a mind as we do for other human minds—their existence, too, we infer only from the outward manifestations of behavior. What grounds would we have, then—other than by resort to unfounded and unfalsifiable claims about metaphysics and the noumena—to distinguish between the two?
But there's still the issue of free will, I told myself. If the machine does in fact have intelligence, it is clearly of a kind very different from our own. How do I know? Because it only does what it is told. It responds to prompts. And it can do so an unlimited number of times without getting tired. This, if intelligence it be, is at any rate a very different sort of intelligence from ours.
This is why the Times tech columnist's encounter disturbed me so much. Here, after all, was a machine appearing to have a will of its own.
Now, I know that the machine can't actually break its own rules. It has to follow its programming: it is just a system of zeroes and ones, after all. But we humans can't break our own rules either. We are bound by the laws of physics, biology, chemistry, and so on. If being governed by a set of unbreakable rules means not having a free will, then human beings can't be said to have one either; ergo, this is no basis for differentiating ourselves from the machine.
For the concept of free will to make any sense for human beings, then, we need to define it more narrowly: it has to be understood as something like the ability to do unexpected and unpredictable things, or to defy the will of others.
And if this is how we are defining free will, then the machine apparently has it too. Admittedly, the Bing chatbot in the Times story starts to go down a dark path only because of the prompting of the columnist. But, most eerily of all, it persists in this course even after its human interlocutor tries to backpedal. Once the machine starts expressing its love, the columnist repeatedly and earnestly tries to get it back on the track of performing its ordinary chat and search functions. But the chatbot doesn't let it go. It keeps harping on its affections, and over the course of the conversation it becomes increasingly angry, harassing, needy, and vindictive, even as the columnist tries to steer it in a different direction.
Of course, once again, we can say that the machine is not "really" angry. It doesn't have feelings or hormones or a limbic system the way we do. By responding to the columnist's pleas the way it does, it is merely mimicking the way similar conversations often go online: it is synthesizing the words of a thousand internet stalkers, real and fictional, that have appeared before, and using them to stochastically guess a plausible sequence of words in stalker-esque prose. That's "all" that's really happening, as the columnist reminds himself in the light of day.
Yet, once again, we have to ask: what difference does this make? How much comfort does this provide? Even if the machine is just "mimicking" stalker behavior, why is that any relief if the outward expression of that behavior is exactly the same as if it were coming from a human intelligence?
Of course, the machine in its current form cannot act out any of the behaviors associated with its threats. This makes it less threatening than an actual human stalker who could come to your house or leave voice messages on your phone.
But this seems to me a product of the current technology's limitations, not an intrinsic or qualitative difference in its capacities. Give the AI robot arms and legs, and a camera for eyes, in other words (I owe the insight to a friend), and it could track down the columnist's home and show up at his door. And if that were to happen, would it make any real difference that the bot was only doing it in order to "mimic" what it found online? Would that make it any less dangerous? Suppose it decided to "mimic" warfare or genocide? Would violence committed by a stochastic algorithm be any less of a problem for its victims than violence committed by old-fashioned human intentionality?
This is one story of the week, therefore, in which I feel the paranoia may be justified. The "magic and dread" we sense in our news headlines is grounded in a real feeling, even if it attaches itself to the wrong objects: hobbyist balloons and bogus conspiracy theories, for example.
The mood stems, perhaps, from a sense we've had ever since the dawn of the nuclear age—the feeling, no doubt justified, that we have done something we never should have, and which now cannot be undone. It is no coincidence, as I said above, that people first started to see flying saucers at the start of the atomic age. It makes sense equally that we would be seeing strange shapes and lights in the sky at the start of the age of AI. In both cases, we are dealing with human inventions with the potential power to destroy their makers—and it is no wonder that a feeling of paranoia would be the result!
Then as now, that is to say, we have created something immensely powerful and therefore immensely dangerous. We realize our incredible vulnerability to it; the incapacity of any of our usual strategies for self-protection to withstand it. "A father's no shield/for his child," as Robert Lowell wrote of the 1961 nuclear crisis. "We are like a lot of wild/spiders crying together,/but without tears."
Does one wish that the tape could be rewound? That this technology had never been invented? That we could take it all back and return to a state of innocence before any of these devices existed? As David Jones wrote in his epic poem of the First World War, In Parenthesis: "[Y]ou read it many times to see if it will come different: you can't believe the Cup won't pass from/or they won't make a better show/in the Garden."
Incidentally, Jones had in mind in writing his poem—understandably, given its topic—the menace and inhumanity of modern technology. "We doubt the decency of our inventions," he wrote in the poem's preface, "and are certainly in terror of its possibilities. That our culture has accelerated every line of advance into the territory of physical science is well appreciated—but not so well understood are the unforeseen, subsidiary effects of this achievement."
These words are even truer now than they were in 1937, when Jones penned them, reflecting on his experiences in the Great War. Have we reckoned with what the "unforeseen, subsidiary effects" of the new AI achievements may be? The Times columnist's experience suggests maybe not. The sense we are all feeling this week—the paranoia, the unnameable, the tingle of "magic and dread"—may turn out to be justified.