T4K3.news
AI empathy may shift humanness perceptions
A new set of experiments shows emotionally intelligent AI can change how we judge real people, with real workplace and consumer implications.

Assimilation of AI emotion reshapes humanness and raises risk of dehumanization
Five experiments published in the Journal of Consumer Psychology examined both embodied AI, such as humanoid robots, and disembodied AI, such as chatbots. Participants watched footage of the humanoid robot Atlas performing either an expressive dance or a utilitarian parkour routine, then read scenarios about workplace changes that could harm employees. Those who watched the expressive robot tended to attribute more mind to the AI and were more supportive of harsh, dehumanizing measures toward workers.
A second set of studies tested why the effect occurs. When the AI's capabilities were described as moderate, the dancing robot still led to greater dehumanization of people. But when the AI was framed as extremely capable, the effect reversed and people treated others as more human, suggesting a contrast effect kicks in once machines look clearly nonhuman. Further experiments isolated which capabilities drive the result: only emotional capacity in AI produced increased dehumanization, not high cognitive ability. Real-world tasks reinforced the pattern, with participants who had been exposed to emotionally capable AI making choices that favored a company with poor worker conditions.
Key Takeaways
- "The more we perceive social and emotional capabilities in AI, the more we see real people as machine-like" — the core assimilation mechanism, as described by the author
- "Design AI to stay human centered or we risk eroding trust" — editorial takeaway on design responsibility
- "Keep humans at the center of AI deployments" — recommendation tied to deployment
- "Empathy in machines can backfire on social values if not handled carefully" — warning about broader implications
The research highlights a paradox at the heart of modern AI design: making machines seem empathetic can win user acceptance while quietly dulling our sense of shared humanity. If AI in customer service or healthcare appears to feel, people may treat real workers as less worthy of care. This matters not just for morale, but for hiring, wages, and public trust in technology. The findings also point to practical guardrails: avoid blurring the line between machine and human, emphasize practical limits of AI, and build ethical checks into frontline deployments. As AI becomes more embedded in daily life, designers and policymakers will need to balance usefulness with a clear commitment to human dignity.
AI-induced dehumanization risk
The study shows that emotionally capable AI can subtly reduce the perceived humanness of real people, potentially influencing workplace treatment and consumer choices. This raises concerns about worker dignity, trust in technology, and long-term social norms.
Guarding human dignity in an era of smart machines requires thoughtful design and vigilant oversight.