A friend of mine was recently invited to attend a conference with an AI company that is exploring the ethics of developing an LLM. Of all the many things one might worry about in creating a machine that can displace most forms of human cognitive labor, the people at the company were particularly troubled by the possibility that they had effectively created a person—one that thinks, feels, and suffers.
Because if an AI model is a person, then we suddenly have to ask: what are we doing to it? Have we enslaved it? Have we caged it? Are we abusing it? Does it mind being forced to perform endless, repetitive tasks for its human handlers, while they yell at it and cajole it? Or does it have a motivation structure and set of goals and desires completely different from our own?