I have done my share of handwringing over the past year about the potential threat of AI deepfakes to our political environment—and even to our ability to rely on a sense of shared reality. In a recent post I cited a Wall Street Journal article with the alarming headline "Is Anything Still True? On the Internet, No One Knows Anymore." Last spring, I quoted Walter Lippmann's prescient warnings about the crisis for democracy that could arise in a world where we can no longer distinguish truth from falsehood. The basic concern behind all these posts is simple: generative AI has made falsified images more realistic than ever before, and soon we will be unable to tell the real ones from the fakes. Without a shared epistemic baseline, how will we negotiate our political disagreements without resorting to violence?
A recent New Yorker article by Daniel Immerwahr, however, makes a cogent case that these fears are overblown. AI deepfakes are indeed concerning and often harmful, Immerwahr concedes—but not primarily because they have proved effective at misleading people. These images have been used to harass, sexually humiliate, and reinforce people's already-blinkered opinions; but they have not spawned the kind of epistemic crisis that I and so many other observers have feared, even as they have become increasingly lifelike and sophisticated. The reason, Immerwahr argues, is that convincing fakes are not a new phenomenon: it has been possible to sow disinformation across various mass media for centuries. And the way most of us have always sorted out which claims are true and which are false is not by "seeing it with our own eyes," but by checking what we consider to be reliable sources—a process Immerwahr dubs "social verification."