Healthcare and therapy systems face a worsening workforce shortage, creating an urgent need for technologies that can support or augment human roles. Much existing work, however, emphasizes functional-task support while overlooking the emotional contribution of human caregivers, an omission that is especially critical in care contexts where empathy and emotional support are central to patient well-being. In rehabilitation, for example, robots can deliver highly repeatable, standardized training yet still fall short of human therapists; a key missing ingredient is the positive affective benefit that typically arises from interpersonal interaction during therapy, which has been underestimated and remains difficult to build into technological systems. Interpersonal interaction can strengthen the patient-clinician relationship and improve satisfaction, trust, adherence, and clinical outcomes, but resource constraints make consistent, high-quality empathic interaction hard to sustain.

"This motivates 'artificial empathy,' defined as a machine's capacity to perceive, interpret, and simulate empathic responses during human-machine interaction, implemented via algorithmic recognition and response rather than genuine affective experience," said author Tianyu Jia, a researcher at Imperial College London. "This review surveys major platforms, including multiplayer games, social robots, and virtual agents, and how they incorporate interpersonal interaction to advance artificial empathy in therapy and healthcare."



This review surveys three major platform families through which interpersonal interaction can be embedded to advance artificial empathy in therapy and healthcare: multiplayer games, social robots, and virtual agents. Multiplayer games are framed as a medium for bringing real human-human interaction into digital rehabilitation and training, using cooperative or competitive structures to elicit social support, motivation, and engagement; yet reported benefits can be inconsistent, often hinging on task design and individual differences, which calls for more careful interaction design and personalization. Social robots leverage embodiment and multimodal social cues (e.g., gaze, posture, facial expression, speech, and touch) to strengthen companionship and interaction quality, frequently acting as coach- or companion-like partners; the review discusses expectation-management issues driven by appearance and behavior design and highlights the trend of integrating large language models (LLMs) to improve dialogue generation and personalization. Virtual agents emphasize scalability and lower deployment cost, delivering social behaviors via screens or VR/AR/MR and potentially enhancing presence with haptics and wearables; the review positions AI and generative models as key drivers toward more natural, personalized, and emotionally intelligent interactions.



After reviewing the three platforms, the paper turns to what it would take to achieve stronger artificial empathy. Future human-agent systems should support deeper, more seamless closed-loop interaction by estimating users' cognitive and affective states in real time, with emotion recognition as a central capability: drawing on behavioral cues (voice, text, facial expressions, gestures, eye tracking) and physiological signals (EEG, ECG, heart rate, galvanic skin response) to infer emotions and enable closed-loop regulation. The review notes that emotion recognition is trending toward more contextual, multimodal approaches with improved hardware support, yet current work often lacks ecological validity and struggles to generalize across cultures. Beyond emotions, interaction-relevant constructs such as trust, engagement, social presence, and rapport are important but still lack accurate, real-time quantitative models and remain hard to use for flexible behavioral adaptation. To move from short-term reactions to sustained personalization, the review emphasizes adapting interaction styles using inferred personality traits and memory of past interactions, routines, and preferences to improve comfort, trust, and adherence.

The paper then outlines critical concerns: clinical and practical validation is still limited, and many studies use small samples and short-term interventions with heterogeneous measures and weak controls, motivating unified evaluation frameworks and more rigorous, longitudinal designs. "Ethically, overreliance on simulated 'pseudo-empathy' can foster misuse and false attachment and displace real relationships, while generative-AI hallucinations can be harmful in high-stakes healthcare, so artificial empathy should remain an adjunct rather than a replacement for interpersonal communication," said Jia.
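The closed-loop cycle described above, estimating an affective state from multimodal cues and then adapting the agent's behavior, can be sketched in a few lines. The signal ranges, fusion rule, and strategy labels below are illustrative assumptions for the sake of the sketch, not models from the review itself.

```python
# Hypothetical sketch of one pass through a closed-loop empathic interaction:
# fuse behavioral and physiological cues into a valence/arousal estimate,
# then map that estimate to a coarse adaptation strategy.
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    valence: float  # -1 (negative) .. +1 (positive)
    arousal: float  #  0 (calm)     .. 1 (excited)

def fuse_modalities(facial_valence: float, voice_arousal: float,
                    heart_rate: float, gsr: float) -> AffectEstimate:
    """Toy late fusion: average normalized cues (baselines are assumptions)."""
    hr_arousal = min(max((heart_rate - 60.0) / 60.0, 0.0), 1.0)  # bpm above rest
    gsr_arousal = min(max(gsr / 10.0, 0.0), 1.0)                 # microsiemens
    arousal = (voice_arousal + hr_arousal + gsr_arousal) / 3.0
    return AffectEstimate(valence=facial_valence, arousal=arousal)

def adapt_response(state: AffectEstimate) -> str:
    """Map the inferred state to a coarse interaction strategy."""
    if state.valence < 0 and state.arousal > 0.6:
        return "de-escalate"   # frustrated: slow down, simplify the task
    if state.valence < 0:
        return "encourage"     # discouraged: offer empathic support
    if state.arousal < 0.3:
        return "stimulate"     # disengaged: raise challenge or novelty
    return "maintain"          # engaged and positive: keep current style

# One pass through the loop with example sensor readings.
estimate = fuse_modalities(facial_valence=-0.4, voice_arousal=0.8,
                           heart_rate=95.0, gsr=7.0)
strategy = adapt_response(estimate)
```

In a deployed system each cycle would run continuously, with the chosen strategy feeding back into the agent's next utterance or task difficulty, which is what makes the loop "closed."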

IMAGE CREDIT: NASA.

