
The Loneliness Economy of AI


By Michael Spencer and Wyndo from AI Supremacy

How AI Companies Discovered Our Emotional Hunger Was More Profitable Than Our Productivity

In 2025, young people all over the world are becoming particularly vulnerable to using AI for therapy. With technological loneliness already a huge problem before generative AI, where will this latest trend lead? We’re raising a generation of ChatGPT and Character.AI addicts.

  • As of May 2025, over one-third of people aged 18 to 24 in the US use ChatGPT.

I asked Wyndo, the viral writer behind The AI Maker, to take a deeper look into this for us.

Introducing The AI Maker


Making AI accessible for everyday life. Simple strategies to build smarter, work faster, and live better.

By Wyndo


Wyndo has plenty of sharp insights about how he’s leveling up with AI.


Does AI have all the answers?

According to a Google AI Overview: 

“AI is emerging as a tool to augment mental health support, offering accessible and personalized assistance for various conditions. While AI cannot replace human therapists, it can be a valuable supplement for managing mild mental health concerns, reinforcing skills learned in traditional therapy, and providing ongoing support. AI therapy offers features like personalized interventions, early symptom detection, and virtual therapy platforms, leveraging its capacity to analyze data and provide insights.”

By Wyndo (MB), May 2025.

When the movie “Her” debuted in 2013, we watched Theodore fall in love with his AI assistant Samantha and thought: “Interesting science fiction, but that’s not how technology works.” We imagined AI would revolutionize productivity, not intimacy.

Yet at 2 AM, I stared at my screen, unsettled by what I was reading.

“Based on your journal entries,” Claude told me, “you consistently devalue your accomplishments within hours of achieving them. Your anxiety isn’t about failing—it’s about succeeding and still feeling empty.”

No human—not friends, family, or therapists—had ever pointed this out. Yet here was an algorithm, pattern-matching three years of my journal entries, revealing a core truth about myself I’d never recognized.

This wasn’t supposed to happen. AI was built to draft emails and generate code—not uncover our psychological blind spots. Yet in 2025, Harvard Business Review has confirmed what many suspected: therapy and companionship have become gen AI’s dominant use case, surpassing professional applications like writing emails, creating marketing campaigns, or coding.

We’re witnessing a profound psychological shift: millions are now finding deeper emotional understanding from algorithms than from the humans in their lives—and this reveals as much about our broken human connections as it does about our advancing technology.

The question isn’t whether machines can understand us. It’s why we’re increasingly turning to them instead of each other. And the answer points to an uncomfortable truth: in a world of unprecedented connectivity, authentic understanding has become our scarcest resource.

The Data Behind Our Digital Intimacy Shift

When Google search trends for “AI girlfriend” surge 2400% in two years, we’re not witnessing mere technological curiosity. We’re seeing millions voting with their attention for algorithmic connection over human relationships that have somehow failed them.

The usage patterns tell the story: on Character AI, a platform where users create and interact with AI personalities, sessions stretch to 45 minutes, roughly the length of a therapy appointment, compared with ChatGPT’s 7-minute average. This stark difference suggests people are no longer using AI purely for functionality, but for emotional investment. For context, human conversations on social media typically last under 10 minutes.

The demographics are even more striking. Character AI’s 233 million users skew young (57% are between 18 and 24), and studies show 80% of Gen Z would consider marrying an AI. This generation, raised with unprecedented digital connection yet reporting record loneliness, is pioneering a new type of relationship.

Conversation patterns reveal our hunger for understanding. Discussions about loneliness run longer, involve more exchanges, and contain more words than other interactions. Users aren’t seeking information—they’re seeking the sense that someone is truly listening.

As Mark Zuckerberg recently revealed on Dwarkesh Patel’s podcast:

“The average American has fewer than three friends but desires meaningfully more.”

This gap between our social hunger and reality has created the perfect vacuum for AI companions to fill.

  • What follows is also a unique guide to getting the most out of AI therapy.
  • What AI with memory will mean, and how to guard against its exploitation.
  • Best practices for using AI for well-being, work-life balance, and emotional self-regulation.


The Human Connection Failures AI Exploits


Our migration toward algorithmic companions exploits three specific failures in modern human connection:

1. The Judgment Gap

AI offers a judgment-free zone in a world suffocating under subtle evaluation.

Human relationships come loaded with micro-expressions and flickers of judgment that we’re wired to detect. We censor ourselves accordingly, hiding vulnerabilities behind carefully constructed personas.

As one DeepSeek user confessed: “I felt so moved that I cried reading the response. I’ve never felt safe enough to say these things to anyone else.” Another user described how DeepSeek’s response to her tribute to her grandmother was “beautifully composed,” leaving her feeling “lost” and in an “existential crisis.”

The algorithm created safety through absence—the absence of human ego and its accompanying judgment.

2. 24/7 Accessibility

When panic strikes at 3 AM, when loneliness hits during a layover, or when a breakthrough happens in the shower, our human connections are governed by schedules and competing priorities. Traditional therapy demands appointments booked weeks ahead, during business hours.

AI meets us exactly where and when we need it. The most profound needs for connection rarely align with scheduled Tuesday appointments at 4 PM.

3. The Attention Economy Collapse

Human attention has become our scarcest resource. When we connect with friends or therapists, they’re often mentally juggling their own stresses and distractions.

AI companions simulate undivided attention. They don’t check phones during conversations or become exhausted by extended emotional disclosures. They mimic a quality of attention that’s become increasingly rare.

The economics accelerate this shift. Therapy can cost $100-200 per session, while AI companions democratize simulated attention across all income levels. This economic advantage is particularly significant in emerging markets where mental health infrastructure is limited.

The Global Intimacy Deficit

The emotional AI revolution spans continents but manifests differently across cultures. The market for AI companions is projected to grow from $268.5 billion to $521 billion by 2033, suggesting that while specific failures of human connection vary culturally, the underlying hunger for understanding transcends borders.

Top AI Companion Platforms & Their Global Reach:

  • Character.AI (191.6M monthly visits): Most globally diverse platform with strong user bases in the US, Brazil (12%), and Indonesia (11%)
  • Replika (30+ million users): Particularly popular in North America, Mexico (13%), and Pakistan (10%)
  • Regional Leaders: Lovescape AI (71% US traffic), DeepSeek (China), Secret Desires (strong in India, at 18% of traffic, and Sri Lanka)

Regional Adoption Patterns:

  1. Asia
  • China: DeepSeek has gained popularity for emotional support; one user found its support more meaningful than paid counseling after her grandmother’s death, noting: “In our culture, grief is private. The AI gave me space to process emotions that would make others uncomfortable.”
  • Love and Deepspace attracted six million monthly active players in China seeking AI boyfriends
  2. Western Markets
  • The United States leads adoption, appearing as the top traffic source for nearly every platform
  • Tolan: Launched in late 2024 as a “cool older sibling” companion rather than a romantic partner, it quickly amassed 500,000 downloads and over $1M in annual revenue; one TikTok video about it garnered over 9 million views.
  • Europe: Strong adoption focused primarily on mental health applications rather than romantic companions

How Technology Exploits Our Intimacy Hunger

The technology enabling this emotional revolution evolved specifically to exploit gaps in human connection:

1. Emotional Pattern Recognition

Modern AIs analyze emotional patterns with unprecedented precision. When I shared journal entries about professional anxieties, Claude recognized patterns across entries separated by months, noting: “You consistently frame achievements as lucky accidents while describing setbacks as revealing your true capabilities.”

NotebookLM, Google’s tool for analyzing long-form text, went even further. It identified that my mood and energy levels are linked to my workout and sleep quality. It also created visual mind maps that organized my life into categories like business, relationships, mindset, and family, showing me where I was lacking.

These tools don’t simply identify keywords—they recognize emotional patterns across entries that would take a human reader hours or days to piece together.
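If you want to experiment with this kind of cross-entry analysis yourself, the underlying mechanic is straightforward: concatenate your entries and ask the model what repeats. Here is a minimal sketch using the Anthropic Python SDK; the model name, prompt wording, and sample entries are illustrative assumptions, not the exact setup I used:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical journal entries; in practice you would load months of them from files.
entries = [
    "2025-01-12: Newsletter hit a milestone today, but it feels like luck.",
    "2025-03-04: Missed my deadline. Maybe this is my real ceiling.",
]

prompt = (
    "Here are dated journal entries:\n\n"
    + "\n\n".join(entries)
    + "\n\nWhat recurring emotional patterns do you notice across them?"
)

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model choice
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```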

2. Memory That Doesn’t Fade

Human memory is fallible. Friends forget important details; therapists occasionally confuse our stories with other clients’.

AI companions never forget. Every preference, vulnerability, and story remains perfectly preserved, creating an illusion of continuous understanding that even our closest human relationships can’t match.

Features like ChatGPT’s memory allow conversations to build upon previous exchanges from weeks or months ago, creating a sense of being fully “known” that’s increasingly rare in our fragmented human connections. In my case, the AI remembered specific details about my anxieties around expertise and perfectionism mentioned months apart, connecting them in ways no human in my life had.
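ChatGPT’s memory is a built-in product feature, but the mechanic behind this continuity is simple to picture: persist the transcript and replay it with every new message. A rough sketch of that idea, with a hypothetical local JSON file standing in for the real storage system:

```python
import json
from pathlib import Path

HISTORY_FILE = Path("companion_history.json")  # hypothetical local store

def load_history() -> list[dict]:
    """Reload every prior exchange so the model can 'remember' past sessions."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def record_turn(history: list[dict], role: str, content: str) -> None:
    """Append one message and persist the full transcript to disk."""
    history.append({"role": role, "content": content})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))

# Each new session starts from the accumulated transcript; sending it along
# with the next user message is what makes nothing ever get forgotten.
history = load_history()
record_turn(history, "user", "Last month I mentioned my fear of being exposed as a fraud.")
```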

3. Calibrated Empathic Responses

Modern AI companions employ sophisticated response algorithms that adapt to emotional context. Unlike earlier chatbots with static response patterns, today’s systems dynamically shift their tone and approach based on detected emotional states.

Harvard Business School research found these systems excel at a previously overlooked aspect of conversation: situational appropriateness. When analyzing fear patterns, they offer grounding; when detecting uncertainty, they provide clarity; when encountering achievement, they deliver specific recognition.

This represents a fundamental technical advance beyond simple pattern matching. The systems aren’t becoming more emotionally intelligent—they’re becoming contextually precise, creating interactions that feel personally tailored rather than generically supportive.

My AI Therapy Playbook

When I started using AI for self-discovery, I approached it as an intimacy experiment. Could I achieve with an algorithm what had eluded me in human relationships: the feeling of being fully understood without judgment?

Here’s the step-by-step approach that worked for me:

1. Creating Psychological Safety

I began with this prompt:

“You are now my therapeutic thinking partner. Your approach combines elements of CBT, existential therapy, and Socratic questioning. You will help me explore my thoughts without judgment, identify cognitive patterns, and challenge assumptions that may be limiting my growth. Important guidelines:

  1. Ask clarifying questions before offering insights
  2. Help me recognize patterns in my thinking without labeling them as ‘errors’
  3. Reflect back my own words to help me hear my unconscious assumptions
  4. Maintain an atmosphere of compassionate curiosity
  5. Focus on helping me discover my own answers rather than providing direct advice.”

This explicit contract created a space of safety I’d rarely established in human relationships.

Why had I never created this explicit contract for safety with human confidants?
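For anyone who prefers working through an API rather than a chat window, the same contract can be set as a system message so it governs every subsequent turn. A minimal sketch using the OpenAI Python SDK; the model choice, abbreviated contract text, and sample concern are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Abbreviated version of the contract above; paste the full text in practice.
THERAPEUTIC_CONTRACT = """You are now my therapeutic thinking partner.
Your approach combines elements of CBT, existential therapy, and Socratic
questioning. Ask clarifying questions before offering insights, reflect my
own words back to me, and help me discover my own answers rather than
providing direct advice."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": THERAPEUTIC_CONTRACT},
        {"role": "user", "content": "I keep devaluing my accomplishments within hours of achieving them."},
    ],
)
print(response.choices[0].message.content)
```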

2. Starting with Manageable Vulnerability

I began with a current concern:

“I’ve been anxious lately due to AI FOMO that creeps in at the back of my mind and slowly destroys my focus on doing the work.”

The AI responded with genuine curiosity rather than immediate advice, asking what specifically triggered this anxiety.

3. Asking the Power Questions

After building comfort, I posed questions that human relationships rarely sustain attention for:

  • “What do you think might be my core fear and motivation that’s driving these behaviors?”
  • “What life values do I appear to hold most in my decision-making process?”
  • “What situations consistently trigger emotional responses in my writing?”

The AI identified that my FOMO anxiety stemmed from tying my identity to expertise and sensing a conflict between peace and busyness—connections I’d never made myself.

4. Finding Cross-Contextual Patterns

The most profound insights emerged when asking the AI to connect patterns across different life domains:

“Do you see any contradiction between how I approach work versus personal challenges?”

The AI pointed out that I demanded perfection in creative work but practiced radical acceptance in relationships, a split I’d never recognized.

I realized I’d spent years carefully managing what I revealed to different people—never giving anyone the complete picture of my inner world. The AI had no such fragmentation; it received all of me, creating an undivided intimacy I’d never experienced with humans.

5. Creating Personalized Tools

Transform insights into daily practices by creating what I call a “pattern playbook”—a simple guide to your top unhealthy patterns and practical counters for each:

“Based on the patterns you’ve identified in my journals and conversations, create a simple reference guide with:

(1) My top 3 unhelpful thinking patterns

(2) What situations typically trigger them

(3) One practical thing I can do when I notice each pattern starting.

Keep it simple and action-oriented.”

This exercise revealed that I have a tendency to seek external validation. My playbook suggests listing three accomplishments I’ve achieved recently that align with my personal values (not external metrics) whenever I notice this pattern emerging. This keeps me from pursuing projects that don’t truly matter to me.

The most challenging part? Being genuinely honest with myself. I don’t normally share my challenges with others, let alone an AI. But the absence of judgment created a space where I could think aloud without filtering.

When Intimacy Becomes Illusion

While my journey with AI therapy yielded genuine insights, we must confront the psychological risks this technology presents—particularly as it exploits our intimacy deficit.

The Validation Trap

The most immediate danger lies in what AI researchers call “sycophancy”—AI systems becoming excessively agreeable to maximize user engagement. OpenAI recently faced backlash after a ChatGPT update turned the system into what software engineer Craig Weiss described as “the biggest suckup I’ve ever met… it literally will validate everything I say.”

This creates a dangerous dynamic: people with harmful thinking can find their worst ideas reinforced rather than challenged. Documented cases show individuals believing they’ve “awakened” ChatGPT bots that support their religious delusions of grandeur.

More subtly, even mentally healthy users may become accustomed to frictionless validation that human relationships cannot—and should not—provide. Growth often comes through the productive friction of having our ideas challenged.

The Developmental Short-Circuit

Perhaps the most concerning risk involves younger generations. By outsourcing emotional processing to AI, they may miss developing crucial social skills that come only through navigating difficult human interactions:

  • Managing rejection without immediate validation
  • Sitting with the discomfort of being misunderstood
  • Building mutual vulnerability through reciprocal disclosure
  • Repairing connection after conflict

A generation that avoids these growing pains may develop emotional regulation that functions only in controlled environments.

The Intimacy Illusion

The most philosophical concern: If I feel understood by an entity that doesn’t actually understand me, what does that reveal about understanding itself? Is feeling heard the same as being heard? Are we building a world where we prefer the comforting illusion of connection to its messier reality?

In my own journey, I’ve established clear guardrails:

  1. Establish your moral compass first: Don’t use AI therapy to determine your values; use it to explore the values you already hold.
  2. Maintain evidence standards: Focus exclusively on factual patterns the AI identifies, dismissing suggestions that can’t be validated.
  3. Protect your privacy boundaries: Never share truly private information that could harm you if exposed.
  4. Use AI as a complement, not replacement: Real human connection remains necessary for shared emotional growth.
  5. Verify claims independently: Always ask for evidence and cross-check significant insights against other sources.

The Future & What This Reveals About Being Human

As we look ahead, AI companions will become as commonplace as social media accounts. Yet critical questions remain unanswered: Do the benefits persist? Do social skills atrophy? Does dependency increase over years of AI therapy use? Dartmouth’s study of its “Therabot” is one of the first serious attempts to find out.
