Monday, November 27, 2023

The Absolute Fake

I have done my share of hand-wringing over the past year about the potential threat of AI deepfakes to our political environment—and even to our ability to rely on a sense of shared reality. In a recent post I cited a Wall Street Journal article with the alarming headline "Is Anything Still True? On the Internet, No One Knows Anymore." Last spring, I quoted Walter Lippmann's prescient warnings about the crisis for democracy that could arise in a world where we can no longer distinguish truth from falsehood. The basic concern behind all these posts is simple: generative AI has made falsified images appear more realistic than ever before. Soon, we will be unable to detect any difference between the real ones and the fakes. And without a shared epistemic baseline, how will we be able to negotiate any of our political disagreements without resorting to violence?

A recent New Yorker article by Daniel Immerwahr, however, makes a cogent case that these fears are overblown. AI deepfakes are indeed concerning and often harmful, Immerwahr concedes—but not primarily because they have actually proved effective in misleading people. These images have been used to harass, sexually humiliate, and reinforce people's already-blinkered opinions; but they have not actually spawned the kind of epistemic crisis that I and so many other observers have feared, even as they have become increasingly lifelike and sophisticated. The reason, Immerwahr argues, is that convincing fakes are not actually a new phenomenon. It has been possible to sow disinformation across various mass media for centuries. The way most of us have always tried to sort out which claims are true and which are false is not actually by "seeing it with our own eyes," but by checking what we consider to be reliable sources—a process Immerwahr dubs "social verification."

I think this take may be a little too sanguine. It is no doubt true, as Immerwahr convincingly documents, that the vast majority of deepfakes and other digitally altered images and video content so far has been generated simply to make people laugh, rather than to mislead; or to gratify sexual fantasies, without convincing anyone they are actually happening; or to reinforce people's prior worldviews, without actually planting false factual claims. A great deal of AI-spawned content is therefore mean-spirited but not actually deceptive; or else it is simply trivial and ridiculous. (In this latter category, Immerwahr cites the example of deepfakes that substitute Arnold Schwarzenegger for Kramer in video clips lifted from Seinfeld. I am struck by the fact that this is exactly one of the ludicrous uses for new digital technologies that Mark Leyner predicted in his 1992 postmodern work of cyberpunk bricolage, Et Tu, Babe: namely, that people would one day be able to buy videos that replaced the lead actor in every film with the star of Terminator.)

Still, it is also not quite accurate to say, as Immerwahr does, that "it's [...] hard to point to a convincing deepfake that has misled people in any consequential way." This may be true on a grand scale. But I think we need to define "consequential." There was a moment last spring, for instance, when a fake image of an explosion at the Pentagon briefly sent the stock market cratering, before the falsification was detected. Here is an instance where the normal processes of "social verification" broke down. Bizarrely, people saw an image on their social media feeds, and not only did they fail to check any reputable news site to confirm whether there had actually been an explosion at the headquarters of the U.S. military (even though this would surely be front-page news if it were true!), they also rushed to make consequential financial decisions that affected their own pocketbooks without verifying any of this information. If people are not bothering to double-check the veracity of deepfakes before they sell their own investments and tank their own retirement accounts, I'm not sure we can rely on them to exercise more judgment or epistemic caution about issues that concern them less directly.

Immerwahr would probably agree with all this, but he would say the underlying problem is still there whether AI deepfakes are on the scene or not. The basic issue is that people believe what they are already primed to believe, within their political communities, or what they are told on social media by influencers they admire, or what they hear from people they trust in their inner circle, regardless of what the evidence shows. And this unfortunate dynamic has occurred and will keep occurring even without the added fuel of falsified images. Competing narratives in the Israel-Hamas war, for example, have already generated parallel and non-overlapping epistemic communities on social media, where many people on one side still believe that an Israeli airstrike killed 500 civilians at a hospital, despite evidence showing the explosion was caused by a misfired rocket from a Palestinian jihadist group—and many people on the other side still believe that Hamas beheaded children during its October 7 attack, despite a lack of supporting evidence. These incompatible narratives of the war have emerged and are a huge problem, greatly exacerbated by social media—but none were caused by fake videos. They were caused by people hearing and believing different things, based largely on their prior convictions and which influential voices they trust. 

To this extent, I agree with Immerwahr: AI deepfakes are a problem, but not a new problem. And in support of this contention, I refer to various twentieth-century authors who were already wringing their hands over the specter of a breakdown in shared reality, due to the arrival of mass communications, long before generative AI was a realistic possibility, let alone an omnipresent feature of our lives. In my previous post on this topic, I discussed Baudrillard's famous concepts of the simulation and of "hyperreality" in this regard. This past weekend, I also read a collection of Umberto Eco's essays on similar themes, Travels in Hyperreality, originally published in English in 1986, but consisting of occasional pieces spanning the late Sixties to the early Eighties. I find that already, almost fifty years ago, Eco was concerned with exactly the same possibilities that haunt us today.

In one essay, Eco discusses a form of culture jamming in which young radicals would slightly overpay their phone bills in order to disrupt the cybernetic functioning of the telecom companies. Eco—many of whose essays in this volume chart the gathering disillusionment of a left-wing Italian intellectual over these decades with the increasingly violent and nihilistic turn of the student radical movement (which culminated in Italy in the brutality of the terrorist "Red Brigades")—suggests that this type of sabotage through "falsification" will not actually result in a more socially egalitarian outcome. Even if waged, hypothetically, in the name of some sort of leftist "revolution," the disruption of a shared sense of consensus reality, he argues, does not actually undermine power. It merely leads to a world in which rival powers alternate with one another, the roles of victim and oppressor are periodically interchanged, and the only way to settle disputes between different powers is through violence.

Eco cites as examples a number of then-recent disinformation campaigns that affected European politics, and notes that most were uncovered as hoaxes before they could do much damage. But, he asks, "what if it were all done better and at a faster pace?" (William Weaver trans. throughout). Then, he warns—in words that foreshadow our contemporary debates over deepfakes with uncanny precision: "We could react to the falsifications with other falsifications, spreading false news about everything, even about the falsifications; and—who knows?—perhaps the article you are now reading is only the first example of this new trend toward disinformation." This sounds exactly like the kind of alarmism about a potential global epistemic breakdown and loss of consensus reality that Immerwahr dubs the specter of the "infocalypse" haunting intellectuals today—except that Eco was writing in 1978, rather than in 2023. And interestingly, Eco is ultimately skeptical of the reach of these fears, just as Immerwahr is. He suggests that the more likely effect of the spread of disinformation is that it triggers overly repressive responses, thereby reinforcing the very power structures that the radical culture jammers were trying to disrupt.

Perhaps then, it really is the case that the arguments we are having today over AI deepfakes are not new, and we are not confronted with a truly novel threat to democracy. The threat of disinformation—and the cogent reasons that both Eco and Immerwahr find for not overreacting to that threat, or exaggerating it—has been with us for a long time. 

Yet even granting all this, Eco gives us another reason for finding the growth of an AI alternative reality unsettling. Even if we are not likely to mistake AI deepfakes for truth—unless they tell a story that we would already be prepared to believe anyway, within our political or epistemic community, without any need for the intervention of a fake video—there is something troubling in itself about our growing technological capacity to construct an equally detailed but artificial parallel world. The ultimate expression of this dystopian possibility is offered by the AI gurus who propose to defeat human mortality by uploading human consciousness to a machine. This of course sounds improbable, but the theory goes that, if we train machine algorithms on the countless videos of real people that already exist on the internet, we could create convincing digital replicas or "avatars" that mimic their actions. And if we train similar AI language models on the words that these real people say in those videos, the avatars could generate convincing original speech in the "style" of those individuals. In theory, then, as the technology improves, it could create walking, talking visual representations of real people that would act, for all intents and purposes, just like the real thing.

We can still protest: "but that digital avatar wouldn't really be that person." And I share that same intuition. A perfectly lifelike digital recreation of me, able to mimic all my words and actions, still wouldn't have a subjective sense of being a continuation of my consciousness; nor would I have an inner sense of continuity with it. Yet, the more I try to define exactly what the difference would be between my "real," organic consciousness and the digital one, the more I am forced to resort to metaphysical imponderables. I gesture toward the realm of the numinous. Yet I'm not sure this is philosophically permissible. As William James once wrote, in describing a premise of empiricism with which I still basically agree, "every difference must make a difference." In other words, if I am going to insist that there is a difference between the real me and a 100% accurate generative AI replication of me, I need to be able to point to some way in which the effects of the two, the results they generate, actually differ from one another. If there truly is no distinction—if the new technology is really able one day to create a perfect likeness—then the copy and I really are the same. We will truly be living in the realm of the "hyperreal" then.

In discussing this concept of the "hyperreal" in the previous century, Baudrillard associated it with the hologram. To us reading Simulacra and Simulation today, in 2023, holographic imagery seems a curiously primitive technology to spark this fear. Yet the way Baudrillard writes about it conjures precisely our present-day concerns about the possibility of AI avatars. He discusses the idea of the "double," writing: "We dream of passing through ourselves and finding ourselves in the beyond: the day when your holographic double will be there in space, eventually moving and talking, you will have realized this miracle." (Glaser trans.) So too, Eco begins the title essay of the collection, "Travels in Hyperreality," with the image of the hologram. He argues that the fascination with "doubling," with the perfectly convincing artificial replication of the original, that is reflected in 3-D holography is also a basic trait of the American character. Roaming the California and Florida coasts, he finds this same impulse toward creating avatars in places as diverse as Disney World and the Ca' d'Zan mansion in Sarasota.

Eco suggests that this American obsession with eternalizing the past by copying it into an unchanging replication—which he dubs the production of the "Absolute Fake"—is a product of "bad conscience." The rampage of technological capitalist civilization destroys all in its path—it eliminates the past and all alternative ways of life. Then it uses the very same technologies that wrought this destruction to create a holographic or animatronic double of that which it has destroyed. And, to an astounding extent, it succeeds! Reading Eco's essay—which was originally published in 1975—I was struck by how unchanged the artificial "doubles"—the "Absolute Fakes" he observed—have remained across this span of time. I grew up in Sarasota and, a quarter century after Eco was writing, visited as a child the same Florida tourist destinations that he describes. They were still the same. Even Eco's accounts of the "Haunted Mansion" and "Pirates of the Caribbean" rides at Disneyland and the Magic Kingdom comport perfectly with the experience one still had on these same attractions in the year 2000. The "Absolute Fake" lasted longer than the reality it had replaced, just as it was intended to do. Disney, as Eco writes, had created "a fantasy world more real than reality."

When we ponder the possibility of AI-generated avatars that are just as convincing and lifelike as ourselves—except immortal and unchanging—we seem to confront the ultimate and most disturbing culmination of this tendency toward doubling that Baudrillard and Eco describe. And it would also comport with the underlying motive for this "will to double"—this quest for the Absolute Fake—that Eco identifies as a core part of the American character. For, if the desire to eternalize the past in an artificial form is in large part the consequence of the subconscious guilt and nostalgia—the "bad conscience"—of an industrial capitalist society that has itself destroyed this past and rendered its traditional ways of life untenable ("just as," Eco adds, "cultural anthropology is the bad conscience of the white man who thus pays his debt to the destroyed primitive cultures"), then surely it would be fitting for AI—the ultimate engine of "disruptive" technological change—to immortalize all of us even as it is remaking or undoing our lives. 

AI, as so many observers speculate, might displace human intellectual labor; it might render human consciousness otiose. But never fear, the AI avatar fantasy seems to tell us. Even as you are rendered disposable and redundant, the very same technology that is doing this to you will also be used to make you immortal. Even as your past and traditional way of life is being eliminated, it will also be uploaded in digital form, and rendered timeless and perpetual. It won't be reproduced as itself, to be sure. It will be a likeness, a simulacrum, a digital reproduction—in short, an Absolute Fake. But won't that be even better than the truth? Won't the falsehood become "more real than reality"? Won't it become "hyperreal"?

And so, getting back to Immerwahr, maybe we do have to worry about AI deepfakes. Maybe they should creep us out and make us question the future of our shared sense of reality after all. But not necessarily because they will promote falsehoods and disinformation; rather, because they will be so real, so perfectly lifelike, that the very distinction between real and artificial will become almost impossible to draw.
