Wednesday, May 24, 2023

"First Thoughts"

It can't be a good sign that everyone talking about generative AI language models—not least the people making them—keeps comparing the new technology to the nuclear bomb. Yet the analogy seems inescapable. Sam Altman, a co-founder of OpenAI, the venture responsible for ChatGPT, has likened the initiative directly to the Manhattan Project. Geoffrey Hinton, dubbed by reporters the "Godfather of AI," was fond of quoting Robert Oppenheimer. And when Altman appeared before Congress last week, alongside other tech executives in the AI field, his call for global regulation of the emerging industry had only a single logical parallel on the current world stage, which he name-checked explicitly: the International Atomic Energy Agency.

It is an odd spectacle to see the same people who are actively developing the new language models implicitly warning us that their creation has the power to be the next potentially world-ending technology, by analogy to nuclear weapons; but this kind of dual role is not unprecedented. The very progenitors of nuclear technology, whom the AI developers like to cite, were of similarly divided minds about the value of their own creation. Many of the scientists most directly involved in building atomic weaponry went on to become advocates for disarmament and non-proliferation. Altman and his peers seem to be aiming for a similar rehabilitation, but on an accelerated schedule: they are still actively working to build and profit from the devices even as they warn against their use.

One does not know what to make of this. How are we to judge them? Are they hypocrites? Are there not perhaps some mixed motives, other than sheer public-spiritedness, behind their calls for international regulation (a non-proliferation treaty, after all, would significantly reduce the field of potential competitors)? The AI developers would of course say no to all this. They would protest that they knew all along, as they were working on the new technology, that it could prove dangerous in the wrong hands—indeed, that they were among the first to warn against its risks—and that they chose to develop it precisely to ensure that they would be the first to do so. If they didn't build it, someone else surely would. So all the more reason to create it now, and try to install a humane set of values in the hardware.

Yet all these same arguments could be made—and were made—on behalf of the developers of atomic weaponry too. Cormac McCarthy reviews several of these lines of defense in his recent pair of novels, published late last year: The Passenger and Stella Maris. The novels center on a pair of siblings whose father was among the Trinity scientists who built the bomb, and they deal throughout with apocalyptic themes and the moral responsibility of scientists who create technologies with grave destructive potential. The specific apocalypse the characters have in mind is the threat of nuclear armageddon (the twinned novels are set in 1980 and 1972, respectively); but they treat these themes in a way that feels eerily relevant to our present moment of high anxiety over AI as well.

In the course of the second volume, Stella Maris, a counselor in a psychiatric ward asks the younger of the siblings, Alicia, how she assesses her father's moral responsibility for the bomb. For the most part, she absolves him, or at least reduces his blameworthiness to an irrelevancy. She says, as today's AI researchers do, that the bomb was an inevitability. Someone was going to build it. And, her father no doubt thought, "better us than them." But she also notes that her father was deeply skeptical of those other nuclear scientists, like Oppenheimer, who made a great display of their own moral ambivalence or spoke out about their misgivings over further nuclear development. He distrusted those scientists who had suddenly developed "second thoughts," says Alicia. He "said that they should have thought of that sooner. They should have had first thoughts."

Such, too, I'm afraid, is how we must come down in judgment on our new generation of would-be Oppenheimers, testifying before Congress to offer advice on how to save us from the very thing that they are building, and have no intention of ceasing to build. Their "second thoughts," to the extent they have them, may in the end be of about as much value to us as the moral agony of the Trinity scientists was to the people of Hiroshima and Nagasaki. Perhaps all scientists ought to have first thoughts; that is, thoughts before, as Robert Frost put it, "bringing change unheralded."
