Monday, July 1, 2024

Too Late to Be Ambitious?

In a classic essay on mortality, "Urne-Buriall"—which I also discussed in the previous post—the seventeenth-century writer and polymath Sir Thomas Browne at one point makes a striking observation: "'Tis too late to be ambitious," he writes. Why? Because, he argues, the world will imminently be coming to an end. The Biblical prophecies, as he understood them, gave the Earth about six thousand years of existence before the apocalypse would descend. And so, by Browne's reckoning, the likely end of humanity would fall within his own lifetime, or shortly thereafter.

Browne therefore looks back with something like envy upon the generations that preceded his. They, at least—he reflects—could count on monuments to carry their names for centuries if not millennia to come. The people of Browne's generation, by contrast, could not rely on even this much. In his view, some of the men and women then breathing might live to see the second coming and the resurrection of the flesh—so what was the point of hoping to immortalize one's legacy in the memory of future generations? After all, there would be no such generations to come.

From the vantage point of today, it's easy to see the absurdity in these fears. More than four centuries have passed since Sir Thomas Browne's birth—and the world still stands on its pillars. Indeed, Browne's posthumous legacy makes his argument seem more ludicrous still. If anyone has managed to perpetuate his name and memory from the seventeenth century down to the present, it is Browne. Here we are, centuries later, still discussing his works and ideas—the very fate that he said was impossible and inaccessible to the men and women of his time. 

No—we think—Browne had no cause to worry. We are the ones who have to worry. After all, we had the misfortune to be born at the dawn of AI. We are the ones who no longer have time on our side. We are the ones for whom the future cannot promise centuries more of human continuity. We were born just in time to witness the end. Or so some of us think, in our darkest moments. I recall a friend back in 2020 sending me an article about AI development: "No!" he lamented alongside it. "Can't the singularity wait just a little bit longer? I was just finally getting started in my real career!"

My friend, in short, was worried—with Browne—that it was now "too late to be ambitious." What's the point of even starting, he wondered, when the future may be nothingness? What's the point of planning ahead, if all our plans may come to naught? I recently watched, for instance, the first-ever video ad generated entirely by AI. It wasn't exactly good. Human copywriters and videographers could probably still do better. But it was certainly human-like enough to give me the creeps. And if the technology can do this after about two years on the market—how much better will it shortly become?

Now, even as I share some of my friend's fears, I don't go as far as he does. I don't think AI is about to develop super-intelligence and destroy humanity. Indeed, it's quite possible that the impact of generative AI will ultimately be muted. I can imagine few ways in which I would actually be willing to incorporate it into my own life or work, and its promised productivity gains have yet to materialize in most industries. But still, there is something about the rapidity with which it evolves and gains new functions that is unsettling, and that does indeed put one in an apocalyptic mood.

After all: the angst of technological society—the "ache of modernity" of which Thomas Hardy wrote—does not necessarily stem from a belief that technology will destroy us. It is possible to feel this "ache" even while recognizing that the history of technology up to this point has been largely positive. For all our fears of automation and displacement, each new advance has somehow, in the end, managed to create enough new jobs to sustain the human population—indeed, to sustain a much larger population, and at a much higher standard of living.

There is every chance that AI will prove to be yet another such positive development, in the end. But still, this does not entirely remove the "ache." For, even if the future may yet turn out to be positive, and the world is not doomed, that future is still frighteningly unknowable. It is the pace of change that rattles us, even if the change could ultimately prove to be toward something better. It is the sense that we are forced to go blindly into the unknowable future as an act of faith, never being able to foresee how this could all work out, that seems like too much to ask of our frail mortal selves. 

I quoted Pierre Teilhard de Chardin on this point in a recent post. Considering our present apocalyptic fears—the fear we have inherited from Thomas Browne's era that it may already be "too late to be ambitious"—the passage comes back to mind: "[W]hat disconcerts the modern world at its very roots is not being sure, and not seeing how it ever could be sure, that there is an outcome—a suitable outcome—to [its current] evolution," Teilhard de Chardin writes. He adds: "And without the assurance that this tomorrow exists, can we really go on living[?]" (Wall trans.).

It is this, above all else—the inability to foresee "a suitable outcome" to this direction of technological evolution—that causes us so much concern. It is not that the present state of AI is so impressive (indeed, the technology still shows enormous limitations and makes embarrassing mistakes). It is, rather, that we cannot predict where it will develop from here—or, if it continues to develop in the same direction, how that could possibly lead to a future that is still compatible with human employment and the current structure of human society. 

Maybe, like previous generations of technological innovation, AI will somehow prove to be compatible with human employment—and will even augment it, making us all more prosperous. But it is deeply unnerving not to be able to tell in advance how this could possibly be the case. It is this inability to foresee a "suitable" or adequate "outcome" to the change that makes us relate to Sir Thomas Browne's sense that it is now "too late to be ambitious." This is what gives us the feeling, as D.G. Rossetti once wrote in a poem, "That the earth falls asunder, being old."

It is possible that asking people to tolerate this degree of uncertainty places too great a burden on human shoulders. It is possible that it is unfair to ask this of humanity—even if one still admits the possibility that all of this will work out well, and that our present-day apocalyptic fears about generative AI may seem as absurd to future generations four centuries from now as Sir Thomas Browne's fears of the world's imminent demise seem to us today. It is possible that, even admitting all the potential positive "outcomes" of this "evolution," the change and uncertainty required are still too much to bear.

Robert Frost once wrote as follows, in an apt reflection on technological change and automation. And I have to say, for all my reflective anti-Luddite beliefs, part of me nonetheless agrees with him: 

Even while we talk, some chemist at Columbia
Is stealthily contriving wool from jute
That when let loose upon the grazing world
Will put ten thousand farmers out of sheep. [...]
None should be as ingenious as he could,
Not if I had my say. Bounds should be set
To ingenuity for being so cruel
In bringing change unheralded[.]
