Friday, April 22, 2022

Singularity

 A friend and I have a running debate about just how worried we should be about the potential emergence of a "singularity"—i.e. a hypothetical technological intelligence possessing infinite powers. He thinks it's a real concern that stands a chance of overwhelming and subverting human civilization—if not the entire fabric of the multiverse—within the next few decades. 

Why? The thinking goes: machine learning and artificial intelligence are already showing enormous gains in sophistication. As machines teach themselves how to do more and more things, they will eventually figure out how to augment their own intelligence and capacity. This would create a runaway feedback loop in which machines become exponentially more knowledgeable, hence infinitely powerful. 

Now, I, as you might have guessed, am the skeptic in this debate. First, because (and my friend always preempts the point whenever I draw breath to enunciate it) the whole thing smacks of a desire to revive certain ancient myths in a pseudo-scientific gloss. Having just recently liberated itself from the fear of religion's gods and monsters, humanity undertakes to develop a new fear of an artificially created god. 

The singularity, after all, would be a lot like a classic monotheistic deity—all-powerful and all-knowing. And since science has replaced religion in many people's minds as the source of ultimate authoritative knowledge, it stands to reason that it would be the domain in which the myth-makers of the present would find the most success. 

It goes like this: take a technology that most people (including me) barely understand; attribute to it awesome powers; cast oneself and one's peer group in the role of a privileged priesthood that alone possesses the power to tame and control the techno-god; and use fear of this god to acquire power. (A sinister podcast in this vein, one I've argued with on this blog before, was already talking openly about the need to suspend democracy in order to address the collective threat posed by "existential risks" of the singularity variety.) 

And this is not to mention the other major mythic archetype that appears in the singularity conception: the belief in an impending apocalypse. Once the singularity comes, the thinking goes, we all may well be doomed—the entire multiverse could be tampered with and rearranged. My friend describes acquaintances (he lives in Silicon Valley, by the way—the nerve center of this techno-cult) who are so sure this is going to happen they are already expecting not to live past 2050. (Yes, a specific date has even been named, in classic millenarian fashion.) 

But, my friend always says—making all these points before I have a chance to—these resonances with religious myths don't in themselves prove the theory is wrong or that the possibility is not a genuine one. And I have to admit he's got a point about that. 

Then, as if to prove his point still further, a news article came my way last weekend that seemed to blare from its headline on down: "the singularity is coming!" An excellent long-form piece in the New York Times Magazine described GPT-3, the new technology developed by the research lab OpenAI using the principles of machine learning, and its uncanny ability both to teach itself new tricks and to mirror the appearance of human thought. 

The device—an enormous "neural net"—works by playing a particularly sophisticated version of a game in which it tries to guess the next word in a sequence of text. When it gets the answer right, it strengthens that connection, in a way loosely analogous to how the human brain strengthens connections between neurons. 
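Just to make the mechanism concrete for myself (what follows is my own toy sketch in Python, not the actual system, which learns billions of parameters rather than keeping simple counts), here is the "guess the next word" game in miniature:

from collections import defaultdict

# A tiny stand-in corpus; the real system trains on a vast scrape of the internet.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# counts[w][v] plays the role of the "connection strength" from word w to word v.
counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1  # each observation "strengthens" the link

# The model's best guess for the word after "the" is its most strongly
# connected follower.
print(max(counts["the"], key=counts["the"].get))  # -> "cat"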

What's chilling is that this single programmed ability has enabled the machine not only to offer plausible guesses as to how to end various email messages (one of the few applications of related AI technologies that are currently directly visible to us in our day-to-day lives), but also to create entire paragraphs of coherent text that have never been written before. 

Keep in mind that the "guess the word" capability was the only thing given to it at birth. The ability to write grammatically correct sentences and paragraphs, and even lines of computer code, was something it taught itself by extrapolating from this one repetitive task. 

I, of course, had a moment of panic. "Oh, god, it's all real!" What if the singularity is at hand? 

Reading the article in depth didn't entirely remove the sense of eeriness, but it did restore some of my doubt about this coming robo-apocalypse. What the machine is essentially doing, after all, is simpler than it appears: it is synthesizing an admittedly vast amount of data (it was trained on an enormous scrape of text from the internet, quite possibly including this blog), and using it to make probability-informed guesses as to what a plausible sentence in response to any given prompt might sound like. 
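To see how far probability alone can carry you, here is the generation step in the same toy form (again my own illustrative sketch, under the assumption that a simple word-pair table can stand in for the real model's learned distributions): sample each next word in proportion to how often it followed the previous one, with no picture of reality anywhere behind the choices.

import random
from collections import defaultdict

corpus = ("the machine guesses the next word and the machine sounds "
          "plausible because the next word is probable").split()

# Count how often each word follows each other word in the training text.
counts = defaultdict(lambda: defaultdict(int))
for w, v in zip(corpus, corpus[1:]):
    counts[w][v] += 1

def generate(start, length=8):
    """Emit plausible-sounding text by sampling each next word by probability."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts[word]
        if not followers:  # dead end: this word was never followed by anything
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the next word is probable" -- fluent, but nothing is meant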

This is obviously a remarkable thing for a machine to be able to do. The applications and implications of it are enormous. But it's also plainly not the same thing as thinking, as having conscious life, as we understand it. 

The precise way in which this is not the same thing as thinking, of course, is not immediately obvious. My first thought, upon reading the article, was, "oh, so the machine isn't actually being creative. It's just cobbling together in new ways material that has already been expressed before." Which may be true, but the next question arises: is this so different from many forms of human creativity? Much creative work, after all, is really a process of finding original combinations of existing concepts, rather than creation ex nihilo (it's an agglutinative process, as I've argued before).

It was only upon further reflection that it occurred to me what was truly different between what the machine was doing and our own mental activities. The machine was just guessing plausible-sounding answers to questions, based on preexisting materials, rather than recognizing that the question was asking about external reality—some real state of affairs—and trying to analyze data in order to come up with the best response. (Skeptical authors cited in the NYT piece describe the machine as a "stochastic parrot.")

The machine was essentially just giving us "what people might say in response to this," rather than "this is what I think the real answer is." In a literal sense, the machine doesn't know what it's talking about. And this is evident upon close inspection of the paragraphs it generates. Many of them sound at first like a real human voice communicating, until one perceives that they don't actually make complete sense or reflect any analysis or comprehension of data. They are simply right-sounding burbles of empty words. 

My friend doesn't dispute any of this. But still, he says, even if this particular machine is not imminently going to turn itself into the singularity, look at how quickly AI technology has evolved in just the past few years. Why should it not continue to develop at this same pace, eventually crossing the threshold at which it can augment itself endlessly and assume titanic powers? 

I respond that this concern rests on a number of assumptions that certainly can't be treated as given. There may be a cognitive bias toward extrapolating from current trends to believe that they will simply continue in the same direction endlessly—but this is not how the world in general, or technological change in particular, actually works. 

Nuclear weapons proliferated throughout the twentieth century; but they were never used in combat beyond their horrific first use in 1945 (belying many people's predictions, as Thomas Schelling has pointed out). Computing technology expanded by leaps and bounds in the final third of the century, but the overall rate of efficiency and productivity growth in the U.S. economy (which serves as a pretty good heuristic for the rate of meaningful technological innovation overall) slowed during that same period, as Robert Gordon has demonstrated. AI is going through a moment of rapid innovation, but this follows—as the NYT piece notes—a long period of "AI winter" in which it was not improving very rapidly. 

Things moving in one direction do not in fact always continue to move in that same direction at that same rate, for all time. I am not, in short, a believer that historical trends of technological innovations can be used to predict the future. 

My friend, acknowledging this, attributes it in part to the biases of our particular cultural environments. I, he says, on the east coast, tend to emphasize the negative and the pessimistic. I am a nay-sayer, looking to poke holes in the hype that surrounds new technologies and new developments. "There's nothing new under the sun!" I say; and I get it from my environment. 

He, by contrast, lives on the west coast. And their bias—he concedes—is toward a certain kind of puffery. They believe in finding the pearl in every new idea, and keeping themselves open to the possibility that genuinely radical and transformative things can happen, even if it means over-estimating the potential of certain new devices and innovations that appear on the market. 

I think there is probably a lot of truth to this; but what also strikes me about it is that it amounts to a partial confession of something I've long suspected—namely, that the people who believe in the singularity aren't just worried about the possibility of its coming—they also sort of want it to come. 

After all, my friend described his belief in the singularity as the more optimistic of the two biases. Cool, nifty things can still happen in this world, the west coast wants to say—and I guess that includes things like the collapse of the multiverse and the annihilation of the whole human race by a hyper-intelligence. 

It may seem strange to actively welcome a looming apocalypse: but this has been a characteristic trait of apocalyptic cults and millenarian sects for as long as they have existed. In the modern era, when technology has assumed the role of god, it makes sense that this same mixture of fear and desire would attach itself to the awesome power of new technologies. Fears of nuclear annihilation in the late twentieth century, for instance, prompted the emergence of death cults like Aum Shinrikyo that actively sought to bring the nuclear holocaust about (see Robert Lifton on this)—to consummate the horrendous potential of the new weapons. 

Believers in the apocalypse, that is to say, don't just think the end of the world is coming as an empirical matter. They want it to come. As a character in William Gaddis's Carpenter's Gothic puts it—as part of an extended rant about fundamentalist Christians and survivalists who believe in an imminent rapture, and who are preparing for armed struggle against the Antichrist (which they associate with the federal government): "they're hell bent on a self fulfilling prophecy."

Why do people want the apocalypse to come? In part because it promises release from the petty anxieties that dog us every day. No need to worry any more about saving for retirement when we'll all be dead in a few years anyway. 

It also liberates us from the strictures of the social order. Keeping human civilization intact requires constantly regulating one's own desires and impulses, for the sake of others' good. If the whole structure were to collapse one day, it would be a horror, but also a kind of liberation. "For passion, like crime, is antithetical to the smooth operation and prosperity of day-to-day existence," as Thomas Mann once put it (Heim trans.), "and can only welcome every loosening of the fabric of society, every upheaval and disaster in the world[.]"

But perhaps above all, the longing for apocalypse—that strange nostalgia for the deluge that lurks in perhaps every human heart—reflects a kind of universal misanthropy. An all-too human contempt for other human beings, in all their corporeality, folly, ignorance, and error. 

And with this loathing of the human herd comes a belief in one's own special exemption from the general law. The people who believe in the rapture are those who think they will be among the saints destined instantly for heaven. The people who believe most fervently in the singularity are those who think they belong to an elite of the technologically gifted who will be best positioned to command the awesome powers it will unleash, or whose intelligence is so vast that they would be first in line to have their personalities uploaded for digital immortality in the singularity's enormous memory banks. 

It is hatred of the human race, above all else, that underlies belief in the apocalypse. People think that human beings will be annihilated primarily because they don't deserve to live. Many of the people who embrace the most extreme forms of fear about looming ecological collapse harbor in their hearts a strong desire to see humankind punished for its hubris in daring to domesticate nature. So too, deep within the singularity believer is a longing for the nay-sayers and the technologically ignorant Luddites to finally get what's coming to them. 

This is why a character in Gaddis's Carpenter's Gothic responds to the rant above with the following valid observation: "you're the one who wants it." The ranter may not believe in an otherworldly apocalypse, the other character retorts, but he secretly does want to see the true believers, the fundamentalists in their compounds and the survivalists in the woods, fall victim to their own folly. He wants to see an earthly Apocalypse in which they will be destroyed in a misguided attempt to fight the federal authorities (as would later actually happen in Waco, Texas—some eight years after Gaddis's novel was published). 

"To see them all go up like that smoke," she says, "[...] all the stupid, ignorant, blown up in the clouds and there's nobody there, no rapture no anything [....] you're the own who wants Apocalypse, Armageddon," she adds. 

Do I in my heart similarly long to see the singularity believers shown up and proven wrong? Okay... yes, I'll cop to that. But I don't wish on them destruction and annihilation. I want them to be shown up, but not blown up, that is to say, pace the character in Gaddis. 

I just want them to be proven wrong by nothing more dramatic than the fact that the year 2050 will arrive, and, while some important changes will inevitably have happened in our society by then, people will still be living out the core elements of their lives much as they did before. There will still be bills to pay, retirement accounts to manage. They won't get their apocalypse. They, like all the rest of us, will have to face the terror and beauty of actual daily life. 
