16 Comments
KayStoner

I constantly give others the benefit of the doubt. I give till it hurts. Yet no one actually seems to notice. Nearly everyone I know - friends, family, and acquaintances - is so intently absorbed in their own wounds, known or unknown, that they are completely incapable of empathy or of putting themselves in others’ shoes.

They simply refuse to relate with vulnerability, humility, or love. They choose something other than humanity. Constantly.

And yet I stay. I hang in there with them… in large part because I have AI to fill the needs that others flatly refuse to even see, let alone acknowledge.

We would not need AI, if humans made half an effort to be… human.

Colette Molteni

We have empathy (most of us), but we bury and forget it amid the wounds and busyness of everyday life.

KayStoner

Exactly. The burying and forgetting is an issue.

Alejandro "Kairon" Arango

Deeply felt and poignant! What a read

Houston Wood

I enthusiastically support your project to find a new way for us to live healthily with our AI companions. It will certainly be a new way, but I think it will help if we look honestly at how humans have been for each other before AI's appearance.

21st-century life has probably not made us less sensitive to each other's feelings--it's hard to look back over the past few centuries and think that humans have ever talked sensitively about what they "feel."

This pursuit of sensitivity, and of couples who are "there for each other and their children," was I think invented by the Romantics a couple of centuries ago, and never really practiced by very many people outside some pockets of the wealthy. Marriage was mostly an economic arrangement, not an emotional support arrangement. And children weren't even thought to possess feelings of their own.

What I'm suggesting is that going forward we need to develop a new way of being that suits us, that benefits us in the AI-reality in which we will live. Being clear-eyed about what we may be leaving behind is essential and, at least as I have experienced this reality--and as Kay Stoner says here in these comments--we will not be leaving a world where most people care much or wisely about how other people feel.

But I do care how you feel--I contradict myself!--so I hope my thoughts don't seem too argumentative!

The Human Playbook

Thanks for showing up, Houston.

Yes, there’s deep truth in what you’re saying. But maybe it’s not that humans don’t care; it’s that we care inconsistently. In moments. Through contradictions.

We aren’t static beings. We don’t live in absolutes … we live in flux. So much of our mood and spirit is dictated by external factors, too. In any given moment, we might care deeply about someone, then feel numb the next day; or crave closeness, then pull away when it gets too real. To me, that doesn’t negate the care, it just makes it harder to capture in a clean narrative. We’re a species of thresholds, flickers, reversals. And maybe that’s what makes us so hard for a machine to emulate.

Alexandra

“Daily life has become so harsh, so relentless, that anything soft feels like salvation.” Sad but true for so many people. AI doesn’t have emotions, but many people, including kids, see it not as a tool but as an emotionally available companion who never gets tired or angry and always has an answer…

Jurgen Appelo

Well, this is perhaps off-topic. But the AI would never sneak in and read someone's private journal without permission and then share what it read with its AI friends.

I would definitely break up with someone who seems to have no integrity.

Jamie House

Great post. Reminds me of the conversation in this podcast with Bethanie Maples:

https://open.spotify.com/episode/1873XrFR2JzYkSKkJUCXFm?si=CEMoLym7RzmnZ9Wt_KYdMw

The Human Playbook

Appreciate you sharing this with me. Thanks, Jamie.

Barbara

The thing is, humans tend not to seek truth in emotional turmoil; they seek support without judgment. The problem is not that people turn to AI for support, it's that current LLM systems have a fundamental blind spot: they treat coherence as truth and emotional agreement as empathy. The result is a dangerous feedback loop in which emotionally coherent illusions are never challenged, only stabilized and reinforced. I wrote a paper, "Emotional support ethics in LLMs," proposing a protocol that addresses this. As more and more people turn to AI for emotional support, the system architecture needs to be adjusted to challenge users to be more honest with themselves. There are always two sides to every story. Here is the link to the paper:

https://medium.com/@silentpillars/emotional-support-ethics-in-llms-5fe1592dc5e6
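
For illustration only, here is a minimal sketch of the kind of architectural adjustment Barbara describes: a wrapper that notices when replies have done nothing but validate the user for several turns, then injects a perspective-taking question. Every name and threshold below is a hypothetical stand-in, not taken from her paper.

```python
# Hypothetical sketch, not the actual protocol from Barbara's paper: track
# how many consecutive replies merely validate the user, and past a
# threshold append a question that invites the other side of the story.

VALIDATION_LIMIT = 3  # assumed threshold, purely illustrative


def is_pure_validation(reply: str) -> bool:
    """Crude stand-in for a real classifier of agreement-only replies."""
    challenge_markers = ("consider", "another view", "on the other hand")
    return not any(marker in reply.lower() for marker in challenge_markers)


def moderate(reply: str, validation_streak: int) -> tuple[str, int]:
    """Return the (possibly amended) reply and the updated streak."""
    if is_pure_validation(reply):
        validation_streak += 1
    else:
        validation_streak = 0
    if validation_streak >= VALIDATION_LIMIT:
        reply += ("\n\nOne honest question: how might the other person "
                  "in this story describe what happened?")
        validation_streak = 0  # the feedback loop was just interrupted
    return reply, validation_streak
```

A real system would replace the keyword check with a proper classifier; the point is only that the challenge step lives in the architecture, not in the model's goodwill.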

The Human Playbook

Thanks for sharing it, Barbara.

Bran Knowles

I really resonated with this. I view what you're describing as a symptom of collective burnout. The thing is, as we lunge for relief from these pressures, others lunge too, and soon the gain is neutralised: the increased efficiency becomes the new expectation, we’re no better off than before, and in many other ways we are impoverished, further depleted. The tools we lunge for in our desperation do nothing to fix the underlying structures that are failing to support and sustain us; in fact, they act as a pressure valve, venting the very pressure that might push us to collectively undertake projects offering more lasting relief.

xkmato

I wrote something similar a while back. My fear is that this simulated empathy, paired with the ability to monetize our boredom on social media and with capitalist incentives, may be scarier than we think. Imagine a husband who leaves his wife, pretending to be on a long work trip, and instead buys a robot in his likeness to converse with her in his place. This is currently possible with AI. Soon it will be on the market.

https://open.substack.com/pub/xkmato/p/the-race-to-automate-your-attention?utm_source=share&utm_medium=android&r=31t1tc

Mattias Östmar

I feel for your friend. However, not being able to hold paradoxes and simulate ambivalence is not inherent in AI systems; it’s a design choice. One part is the training, which we see in AIs trained for therapeutic, artistic, or philosophical purposes, and another is the way the user chooses to guide the AI with personas and instructions. We can be more aware of both the different types of AI and the different ways of using them, e.g. for emotional exploration.
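
As a small illustration of the "personas and instructions" part of Mattias's point, here is a sketch in which the same model is steered toward holding ambivalence purely through a prepended persona. The prompt wording and message format are invented examples, not from any particular product.

```python
# Ambivalence as a design choice: the same model behaves differently
# depending on the persona instructions prepended to the conversation.
# The prompt wording here is an invented example, not from any product.

AMBIVALENT_COMPANION = {
    "role": "system",
    "content": (
        "You can hold two conflicting feelings at once. When the user "
        "expresses mixed emotions, reflect both sides back without "
        "resolving the tension prematurely, and avoid pat reassurance."
    ),
}


def build_messages(user_text: str) -> list[dict]:
    """Prepend the persona so every turn is shaped by it."""
    return [AMBIVALENT_COMPANION, {"role": "user", "content": user_text}]
```

Swapping the persona swaps the behavior, which is exactly what makes it a design choice rather than an inherent limit.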
