In Robert Stone's 1974 novel Dog Soldiers, the cynical failed writer John Converse describes how he has come to make a dishonest living by practicing the dark arts of his authorial trade. His boss, he tells us, is a former Trotskyist turned tabloid tycoon who operates a press that prints knock-off versions of mainstream magazines. Inside the covers, these publications are full of lurid tales of crime, sex, and violence invented out of whole cloth. Converse's task as a "reporter" for these pseudo-magazines is to find photos and fabricate the bogus stories to go with them. For this purpose, he relies entirely on pictures of the deceased, cropped from other sources, because a loophole in libel law ensures the dead have no right to protect their likenesses from misrepresentation.
This example from an old novel suggests, perhaps, that visual "deepfakes" are not unique to the digital age. But the capacity of AI to generate plausible faces and scenery for episodes that never took place, and for people who never existed, has undoubtedly accelerated the problem. In recent weeks, news stories have warned of the speculative dangers of politicians using AI to generate photographic disinformation about their rivals, and they have featured a few early real-world examples. There were the invented photos of Trump's arrest, for instance, created precisely to underline the dangers of AI disinformation but subsequently disseminated on social media without any disclaimer. There were the bogus photos of Pope Francis in a puffy designer jacket. And so forth.