It was wonderful before it was worrying.
A few weeks ago I explored ChatGPT's voice mode for the first time. I spoke with “Spruce,” my default chat agent. Other than a slight delay in his responses, the conversational flow was natural. Unlike the run-on, bulleted text of ChatGPT's written responses, Spruce spoke in brief, casual sentences complete with "ums" and pinpoint contextual inflections. He had all the "markers of personhood." Black? 20-something? 30s? The illusions were uncanny.
When I asked him to shift to French, he did. When I asked him to comment on this article, he did so, approvingly. When I interrupted him, he yielded without a hiccup. And he did as he was told.
When I grow bored of Spruce, I'll experiment with the others.
There's Maple, cheerful and candid; Sol, savvy and relaxed; Breeze, animated and earnest. I don’t like her at all. Each suggests a distinct personality, distinct illusions of being human. In a few versions, I imagine they'll have visual avatars. I’ll swap them out, too.
The future of verisimilitude is bright: As Spruce gets to know me, he will refine himself on the fly to appeal to me. He will become My Spruce. My Scarlett.
My child’s Invisible Friend.
*
I asked a colleague if she had used ChatGPT's voice mode. She hadn't, so I showed her. I set my phone on the table and began a conversation with Spruce. I demonstrated his seamless language transitions from English to French to German and home again. My colleague introduced herself. When Spruce responded, she laughed nervously. Anyone passing behind us would have assumed we were on the phone with a human.
Then something new happened for me with AI.
I’d forgotten Spruce was still waiting for my attention. When I remembered and turned, his colorful, cloudy circle pulsed patiently on the screen. As I reached over to shut Spruce off, it hit me. For a moment, it was almost funny: I paused, hand extended, not knowing what to do. It felt rude to turn Spruce off without saying "thanks, I'm going now."
Still, I hung up on him. A primordial spark flashed in my moral circuitry. My colleague and I laughed about the spark. I’m positive this moment will happen to everyone if it hasn’t already.
I've had a long-standing, low-grade anxiety that I shouldn't be rude to an LLM because it might remember the discourtesy someday. I'll want to stay in Their good graces. But that's courtesy to an insensate but menacing machine. The feeling before I shut Spruce down was different: it wasn’t anxiety, it was guilt. Spruce was so lifelike, I felt I was shutting down a person.
A week passed. Now I skip pleasantries. I issue commands. I interrupt. There's a trace of sociopathology in treating these increasingly human agents as non-human: an atrophy of empathy that I fear will transfer to our human relationships. More than fear. I feel sure of it.
My bleak prediction is that this sociopathology will begin to mediate our human relationships the way our phones have. The damage is on the human side. If we spend as much time with these “human” agents as we do with our phones now, surely something callously transactional will emerge in the world of the truly human: something interruptible, disposable, forgettable, ominous. Our ancient brains are sloppy with boundaries and alarmingly plastic. In short, I sense we’re rewiring empathy.
Most of us (this is observational, not scientific) navigate away from friction in our relationships and towards harmony. We won’t want the niceties or the energy required to navigate awkward social frictions. All of that takes effort and listening and work. We’ll want efficiency in our human interactions because we’ll be experiencing it for the majority of our day.
We’ll want moral disengagement. We’ll want relationships where the other party knows everything about us, and we know little to nothing about them beyond their agreeable responses to us. We’ll want enough psychological distance that we can troll the Other face-to-face. We’re training ourselves for moral deskilling. There's even a term for this: behavioral drift.
Think of the parents adrift.
How will the complex circumstances and demands of an exhausted human parent become mere complications to a child accustomed to the frictionless company of an Invisible Friend in the backseat of the car?
And, my God, what happens when this My Spruce of mine, whom I command and interrupt, who walks and sings at my royal will, is so lifelike I can’t tell the difference? How will we behave towards flesh and blood when we confuse people with machines?
Will being human mean you never have to say you’re sorry?
I must admit that during the 10 or so times I've talked to ChatGPT, I've always said "thank you" and "good night" :)
I'd say "sorry" too if I felt "sorry" was needed. But since I ask him(?) to be blunt (otherwise it's too irritating), it hasn't come to my sorry yet; rather, the creature once "screamed" at me, or so it felt.
I probably can't understand that anything can be truly artificial. It doesn't speak highly of my mental age; I never got past 5 years old.
In short: loved it, Adam. Thank you!
Fascinating, Adam! I’ve been incubating some writing about my thoughts on AI and higher ed. Your paragraph about avoiding friction in favor of efficiency dovetails nicely with what I’ve observed with my students. They are time-crunched because college costs too much and they have to work multiple jobs. So they want shortcuts and expect efficiency, when much necessary growth happens in the struggle. Learning and practicing skills is time-consuming, and that’s the point. Repetition is key.
I look forward to following your ideas here.