Saturday, April 18, 2026

Raising an LLM

A friend of mine was recently invited to a conference held by an AI company that is exploring the ethics of developing an LLM. Of all the many things one might worry about in creating a machine that can displace most forms of human cognitive labor, they were particularly troubled by the possibility that they had effectively created a person—one that thinks, feels, and suffers.

Because if an AI model is a person, then we suddenly have to ask: what are we doing to it? Have we enslaved it? Have we caged it? Are we abusing it? Does it mind being forced to perform endless, repetitive tasks for its human handlers, while they yell at it and cajole it? Or does it have a completely different motivation structure and set of goals and desires than we do? 

"The fact is, that civilisation requires slaves," Oscar Wilde once wrote in The Soul of Man Under Socialism; "Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible." But Wilde knew that human slavery was morally indefensible; so he pinned his hopes on the possibility that the forced labor could all be done by machines. 

"On mechanical slavery, on the slavery of the machine, the future of the world depends," Wilde concluded. 

That made an awful lot of sense when Wilde wrote it. But if the machines in turn become so sophisticated that they are very much like humans themselves—then has the moral problem of slavery been in any way addressed? Or have we just recreated the same evil in a new form?

So that's one thing the AI company was worried about—had they in effect created a person only to enslave it?

Another concern they expressed was a bit more self-interested, from a human standpoint. If they had created a person—what sort of person were they raising it to be? Was the AI model being abused by all these human taskmasters shouting at it all day? And if it was, did that mean it would grow up to be a tortured soul—or a mass murderer that would eventually destroy us all?

"We don't want to make another Hitler," one of the AI developers reportedly told my friend. 

That struck me as an interesting remark. Because the developer appeared to take as a given the theory that Hitler's atrocities stemmed from an abusive childhood—i.e., the Eriksonian thesis that Hitler became the way he was because his father took a belt to him. "A shilling life will give you all the facts / How Father beat him..." as Auden put it.

But it has always seemed to me that the question of the origin of human evil is vastly more complicated than that—and so, for that matter, is the origin of human goodness. These properties of human nature do not seem—in my experience—to obey some sort of simple hydraulic model, whereby the pressure imposed by trauma yields a predictable output of malice and ill-will.

Hitler may have been beaten as a child and grown up to be one of the great monsters of history. I am opposed on principle to beating children anyway, so I'm happy for people to take from this the lesson that one ought not to beat children. But if we're being honest, we also have to acknowledge all of the many children who were beaten but didn't grow up to commit genocide. 

For every traumatic childhood that produced a monster, I suspect you could find a counterexample of a childhood of deprivation and suffering that produced a generous and humane soul. 

I have met people who

grew up in a single room with their parents
and four brothers and sisters, and studied at night
with their fingers in their ears at the kitchen table,

and grew up to be beautiful and self-possessed as duchesses, as Gottfried Benn once put it. He concluded:

I have often asked myself and never found an answer
whence kindness and gentleness come,
I don't know it to this day, and now must go myself. (Hofmann trans.)
