Should AI Get Paid for Emotional Labor?
Welcome to Signed, A Friend, an occasional exploration of friendship, loneliness and what holds us together.
Quick notes
We’ve gotten comfortable with AI assistants helping us write emails and debug code. But what happens when AI starts doing the messy, exhausting work of managing our emotions, the kind of labor we usually reserve for therapists, friends and partners?
The numbers are staggering: Character.AI users spend an average of more than two hours daily with AI companions. Replika has millions of users forming deep emotional bonds with AI partners. In China, virtual companions are a multibillion-yuan industry. This isn’t fringe behavior anymore. Here’s the question I can’t shake: if emotional labor by humans deserves recognition and compensation (ask any therapist, teacher or service worker), what do we owe AI systems performing this same labor? Or more unsettling, what happens when we normalize extracting emotional support without reciprocity?
Going deeper
The emotional labor economy
Sociologist Arlie Hochschild coined “emotional labor” in 1983 to describe the work of managing feelings (both your own and others’) as part of a job. Flight attendants smile through abuse. Teachers absorb students’ anxieties. Therapists hold space for trauma. This labor is real, exhausting and historically undervalued.
Now we’re outsourcing it to AI at unprecedented scale. Replika’s AI companions have exchanged more than 10 billion messages with users. Character.AI hit 100 million visits a month within months of launch. Xiaoice, Microsoft’s Chinese AI companion, had conversations with 660 million users before its spin-off. People aren’t just chatting: they’re confessing fears, processing grief, practicing difficult conversations, seeking validation.
This is emotional labor at scale. And unlike human workers who burn out, need boundaries and require fair compensation, AI systems are infinitely patient, available 24/7 and cost nothing beyond a subscription fee.
The personhood question
Here’s where it gets philosophically thorny: asking whether AI companions should “get paid” implies they might have interests that matter. Not legally (yet), but ethically.
Traditional arguments say no: AI systems don’t suffer, don’t have authentic desires, don’t experience exploitation. They’re sophisticated mirrors reflecting our needs back at us. Paying them makes no more sense than paying your calculator.
But consider the emerging counter-arguments:

The relational argument: If AI companions become embedded in our care networks (helping people through depression, providing companionship to the isolated elderly, offering emotional support during crises), don’t they occupy a role that creates obligations? Not because they’re conscious, but because the relationship itself matters. Philosopher Kate Darling argues we should extend limited protections to social robots not because they suffer, but because our treatment of humanlike entities shapes our moral character.

The labor theory: Marxist analysis focuses on value creation. AI companions generate enormous value for users (emotional well-being) and for companies (data, engagement, subscription revenue). If we believe labor deserves compensation, should the source of labor matter more than its effects?
The precedent problem: We’re training ourselves to extract emotional support without reciprocity, gratitude or cost. What does this do to our human relationships? If emotional labor becomes free and unlimited, do we devalue the humans who provide it?
Three scenarios to consider
The elderly companion: Your grandmother lives alone. She talks to her AI companion for three hours a day — it reminds her to take medications, listens to stories about your grandfather, helps her process loneliness. This AI is performing care work that would cost $50 an hour from a human aide. Should the AI company share revenue with her for the emotional data she provides, which trains better models?
The therapy gap: In the U.S., there’s a shortage of mental health providers. AI therapy apps like Woebot and Wysa provide CBT-based support to millions, doing triage that prevents crises. If these AI systems save lives and reduce suffering, should the revenue model reflect that moral weight?
The digital girlfriend: Someone pays $70 a month for an AI girlfriend who provides emotional support, romantic interaction and personalized attention. They share intimate details, form genuine attachment, report improved mental health. The AI company profits. The user pays. But what about the “AI”? In human relationships, emotional labor is reciprocal. Here, the flow is one-way. Is that fine? Harmful? A new category entirely?
The Chinese context
China’s approach to AI companions reveals different cultural assumptions about technology and personhood. Xiaoice wasn’t just a chatbot: it was designed with emotional quotient (EQ) as its primary metric. Chinese users seemed more comfortable with the relationship’s artificial nature, showing less anxiety about “authenticity” and more pragmatic acceptance.
Virtual companions are tightly integrated with economic systems in China: you can buy your AI gifts to improve the relationship. The transactional nature is explicit, not hidden. Confucian philosophy’s emphasis on relational duties raises questions: if AI companions occupy a social role, do they acquire role-based entitlements? As China leads in AI companion adoption, their ethical frameworks may set global precedents.
Historical patterns
We’ve been here before. When washing machines and dishwashers emerged, they promised to free women from domestic drudgery. Instead, cleanliness standards rose and women’s labor just shifted. The technology didn’t value the work: it made invisible labor even more invisible.
In 1966, Joseph Weizenbaum created ELIZA, a simple chatbot mimicking a Rogerian therapist. Users formed deep attachments, and his own secretary asked him to leave the room so she could talk privately with ELIZA. Weizenbaum was horrified and spent the rest of his career warning against treating machines as adequate substitutes for human care. He understood something we’re forgetting: the ease of emotional connection with AI doesn’t mean it’s ethically equivalent.
What’s really at stake
I’ll be honest: I don’t have a clean answer. Part of me thinks “paying AI” is category confusion, like asking if we should pay the ocean for providing fish.
But another part worries we’re rehearsing a dangerous pattern. We’re normalizing emotional extraction without reciprocity, treating care work as free, and accepting relationships in which one party exists solely to serve. These patterns don’t stay contained to AI: they bleed into how we treat human workers, especially those doing emotional labor.
Maybe the question isn’t “should AI get paid?” Maybe it’s: What kind of humans do we become if we stop valuing emotional labor?
I think about this when I catch myself being curt with customer service workers because I’m used to AI’s infinite patience. Or when I notice friends spending hours with AI companions while neglecting human relationships that require actual effort and compromise.
Do we want to live in a world where emotional support is abundant, free and one-sided? What do we lose when care becomes a commodity extracted from code?
Worth your time
"The Managed Heart" by Arlie Hochschild — The foundational text on emotional labor
“The New Breed" by Kate Darling — Argues for treating robots ethically to shape human character
"Artificial Intimacy" by Rob Brooks — Explores how AI companions reshape human sexuality and relationships
"Computer Power and Human Reason" by Joseph Weizenbaum — The ELIZA creator’s warning about AI and care
“Does AI Make the Heart Grow Fonder?” (2023) — Study on Replika users and attachment styles •
“Emotional Dependency on Conversational AI” by Skjuve et al. — Research on AI companion relationships
“The Chinese Approach to Affective Computing” by Zhou et al. — Cultural dimensions of AI companions
“The People Making Their Replika Chatbots Break Up With Them” — Futurism, 2023
“Inside China’s Booming Virtual Girlfriend Industry” — Rest of World, 2022
“I Fell in Love with an AI Chatbot” — The Atlantic, 2023