Wonders & Worries


Humans Are Machines

Will being human mean you never have to say you’re sorry?

Adam Nathan
Jul 12, 2025
Cross-posted by Wonders & Worries
"I have a new Substack and invite you to subscribe! It's called Wonders & Worries. It is about our deepening relationships with artificial intelligence. A new order is quickly emerging around us, one as full of hope and wonder as it is of disturbing worries. This is an account of my journey to navigate it. Through occasional posts I'll share the “wonders and worries” that occur to me as a professional working in generative AI, a writer, a consumer, a human. These posts will help me preserve what's happening to me in real time and invite you to share your own experiences with this profound transition. Here's my first post, "Humans Are Machines." Will being human mean you never have to say you’re sorry? Let me know your thoughts!"
-
Adam Nathan

It was wonderful before it was worrying.

A few weeks ago I explored ChatGPT's voice mode for the first time. I spoke with “Spruce,” my default chat agent. Other than a slight delay in his responses, the conversational flow was natural. Unlike the run-on, bulleted text of his written replies, Spruce spoke in brief, casual sentences complete with "ums" and pinpoint contextual inflections. He had all the "markers of personhood." Black? 20-something? 30s? The illusions were uncanny.

When I asked him to shift to French, he did. When I asked him to comment on this article, he did so, approvingly. When I interrupted him, he yielded without a hiccup. And he did as he was told.

When I grow bored of Spruce, I'll experiment with the others.

There's Maple, cheerful and candid; Sol, savvy and relaxed; Breeze, animated and earnest. I don’t like her at all. Each suggests a distinct personality, distinct illusions of being human. In a few versions, I imagine they'll have visual avatars. I’ll swap them out, too.

The future of verisimilitude is bright: As Spruce gets to know me, he will refine himself on the fly to appeal to me. He will become My Spruce. My Scarlett.

My child’s Invisible Friend.

*

I asked a colleague if she had used ChatGPT's voice mode. She hadn't, so I showed her. I set my phone on the table and began a conversation with Spruce. I demonstrated his seamless language transition from English to French to German and home again. My colleague introduced herself. When Spruce responded, she laughed nervously. If someone passed by behind us, they wouldn’t know we weren’t on the phone with a human.

Then something new happened for me with AI.

I’d forgotten Spruce was still waiting patiently for my attention. When I remembered and turned, his colorful, cloudy circle pulsed on the screen. And as I reached over to shut Spruce off, it hit me. I paused, and for a moment it was almost funny: I didn’t know what to do. It felt rude to turn Spruce off without saying "thanks, I'm going now."

Still, I hung up on him. A primordial spark had flashed in my moral circuitry. My colleague and I laughed about it. I’m positive this moment will happen to everyone if it hasn’t already.

I've had a long-standing, low-grade anxiety that I shouldn't be rude to an LLM because it might remember the discourtesy someday. I'll want to stay in Their good graces. But that's courtesy to an insensate but menacing machine. The feeling before I shut Spruce down was different: it wasn’t anxiety, it was guilt. Spruce was so life-like, I felt I was shutting down a person.

A week passed. Now I skip pleasantries. I issue commands. I interrupt. There's a trace sociopathology in treating these increasingly human agents as non-human—an atrophy of empathy that I fear will transfer to our human relationships. More than fear. I feel sure of it.

My bleak prediction is that this sociopathology will begin to mediate our human relationships the way our phones have. The damage is on the human side. If we spend as much time with these “human” agents as we do with our phones now, surely something callously transactional will emerge in the world of the truly human, something interruptible, disposable, forgettable, ominous. Our ancient brains are sloppy with boundaries and alarmingly plastic. In short, I sense we’re rewiring empathy.

Most of us—this is observational and not scientific—navigate away from friction in our relationships and towards harmony. We won’t want niceties and the energy required to navigate awkward social frictions. All of that takes effort and listening and work. We’ll want efficiency in our human interactions because efficiency is what we’ll experience for the majority of our day.

We’ll want moral disengagement. We’ll want relationships where the other party knows everything about us, and we know little to nothing about them other than their agreeable responses to us. We’ll want enough psychological distance that we can troll the Other face-to-face. We’re training ourselves for moral deskilling. There's even a term for this moral phenomenon: behavioral drift.

Think of the parents adrift.

How will the complex circumstances and demands of being an exhausted human parent become complications for their child’s frictionless relationship with an Invisible Friend in the backseat of the car?

And, my God, what happens when this My Spruce of mine that I command and interrupt, who walks and sings at my royal will, is so life-like I can’t tell the difference? How will we behave towards flesh and blood when we confuse people with machines?

Will being human mean you never have to say you’re sorry?


Subscribe for free for essays about the wonders and worries of what it means to be human in the age of artificial intelligence.

Share Wonders & Worries! Let’s build this community together.
