I was saying just the other day on this blog that there was a weird and growing disconnect between the panic consuming the world of Bay Area AI developers, and the generally blasé attitude toward AI on the part of the rest of us "normies."
This was definitely a week, though, in which the gap suddenly shrank. The panic caught up with the rest of us—helped in large part by a single, well-written and algorithmically optimized X post called "Something Big is Happening."
The piece—written, reportedly (and fittingly) with some help from AI—was the definition of viral content.
Peggy Noonan's column yesterday in the Wall Street Journal was a gripping piece; but by its latter half she had basically been reduced to paraphrasing this X post in breathless terror—
Thereby proving the point, ironically—if Ross Douthat is correct that the chatbots had a considerable hand in drafting this piece—that AI can do Peggy Noonan's job for her too.
And indeed, the specter of white-collar job displacement is the one that is most haunting our times.
The terror that the X piece taps into is the realization that the latest AI models can already—not hypothetically, not assuming they all get much better rapidly, but even now, as of this writing—do most of our jobs as well as or better than we can.
Of course, there are still some reasons to take all this anxiety with a pinch of salt.
Skeptics of the piece point to its origins within AI development circles. They say that essays written from this perspective often double as both apocalyptic hand-wringing—and extended advertisements for the new technology.
After all—the X piece culminates with a clarion call to buy a subscription to all the latest models and start learning how to use them now, lest you be left in the dust by every competitor.
Ross Douthat in the New York Times also points out that the argument about white collar job loss could be underestimating just how sticky and frictional such technological employment transitions in our economy can be.
But still—he observes—the part of the post's argument that cannot be denied is already frightening enough: current AI language models can indeed perform most cognitive professional tasks at human or above-human levels.
I suspect a great many of us—even those of us who thought we were enthusiasts for technological change when it only seemed to be affecting other people's jobs—
Those of us who once said "meh, there's always hand-wringing about job displacement, but eventually employment catches up and takes new forms"—
Will soon be saying, with Robert Frost:
None should be as ingenious as he could,
Not if I had my say. Bounds should be set
To ingenuity for being so cruel
In bringing change unheralded[.]
Frost's example could be multiplied across all employment sectors in the years ahead:
Even while we talk, some chemist at Columbia
Is stealthily contriving wool from jute
That when let loose upon the grazing world
Will put ten thousand farmers out of sheep. [...]
And the worst of it is that there appears—to Peggy Noonan's point—to be no place to hide. As she put it well (and doubtless without the aid of chatbots):
It is like we are on a beach, "looking, with awe and a resigned terror," at the incoming tidal wave, "and wondering where is safety, and can we get to it. Or is the land flat all around and nowhere to go?"
Many of those who once told the displaced coal miners and steel workers who complained about this sort of thing—"just learn to code!"—are now saying to themselves "I did learn to code; and now a machine can do even that better!"
Carlyle's "demon of Mechanism [...] changing his shape like a very Proteus; and infallibly at every change of shape, oversetting whole multitudes of workmen"—is coming now—by a fitting irony—for the very people who once most preached its virtues.
Nate Silver, over on his Substack, weighed in on this stage of the panic cycle the other day, observing in passing that he was less worried for his own sake than he would have been at any prior stage of his career.
The worst thing that happens to him, he remarks, is that he'll have to retire slightly earlier than he'd expected and spend all his time playing poker (which one quickly perceives is a long-standing fantasy anyway).
The situation would be a lot worse, he said, if he were starting his career now. He feels for the people who are—or who have kids who are.
Weirdly, I feel pretty similar. If we have to have a world-spanning economic transformation that eliminates all professional employment, this is a better time for it to come for me than some other times might have been.
At least I'm not starting out on my career just now. I had a good run of being paid to write for a non-profit that I can look back on fondly, before rising generations begin to find it preposterous that human brains ever did something so archaic.
At least I'm fairly entrenched in a sector that appears destined to be maximally resistant to AI adoption—a living embodiment of that stickiness and "friction" Ross Douthat was talking about.
At least I have savings and a nest-egg and all the rest of it.
I felt the same at the start of the pandemic: It could be worse. At least I'm in a job that can easily transition to remote work; at least I wasn't unemployed overnight. And staying at home all day was what I had wanted to do already.
But if you think this sense of being one of the relatively fortunate ones gives me any deeper feeling of long-term security, in the face of Noonan's looming AI "tsunami"—you don't understand much about human psychology.
Even if I feel like I have a decent shot at being one of the few shipwrecked survivors—at least for a time—the prospect is like the one Brecht described in "To Those Born Later":
It is true I still earn my keep
But, believe me, that is only an accident. Nothing
I do gives me the right to eat my fill.
By chance I've been spared. (If my luck breaks, I am lost.)