Healthcare and therapy systems face a worsening workforce shortage, creating an urgent need for technologies that can support or augment human roles. However, much existing work emphasizes functional-task support while overlooking the emotional contribution of humans, an omission that is especially critical in care contexts where empathy and emotional support are central to patient well-being. In rehabilitation, for example, robots can deliver highly repeatable, standardized training, yet they still fall short of human therapists; a key missing ingredient is the positive affective benefit that typically arises from interpersonal interaction during therapy, which has been underestimated and remains difficult to integrate into technological systems. Interpersonal interaction can strengthen the patient-clinician relationship and improve satisfaction, trust, adherence, and clinical outcomes, but resource constraints make consistent, high-quality empathic interaction hard to sustain. "This motivates 'artificial empathy,' defined as a machine's capacity to perceive, interpret, and simulate empathic responses during human-machine interaction, implemented via algorithmic recognition and response rather than genuine affective experience," said author Tianyu Jia, a researcher at Imperial College London. "This review surveys major platforms, including multiplayer games, social robots, and virtual agents, and how they incorporate interpersonal interaction to advance artificial empathy in therapy and healthcare."

This review surveys three major platform families through which interpersonal interaction can be embedded to advance artificial empathy in therapy and healthcare: multiplayer games, social robots, and virtual agents. Multiplayer games are framed as a medium for bringing real human-human interaction into digital rehabilitation and training, using cooperative or competitive structures to elicit social support, motivation, and engagement; yet reported benefits can be inconsistent, often hinging on task design and individual differences, which calls for more careful interaction design and personalization. Social robots leverage embodiment and multimodal social cues (e.g., gaze, posture, facial expression, speech, and touch) to strengthen companionship and interaction quality, frequently acting as coach- or companion-like partners; the review discusses expectation-management issues driven by appearance and behavior design and highlights the trend of integrating large language models (LLMs) to improve dialogue generation and personalization. Virtual agents emphasize scalability and lower deployment cost, delivering social behaviors via screens or VR/AR/MR and potentially enhancing presence with haptics and wearables; the review positions AI and generative models as key drivers toward more natural, personalized, and emotionally intelligent interactions.
After reviewing the three platforms, the paper turns to what it would take to achieve stronger artificial empathy: future human-agent systems should support deeper, more seamless closed-loop interaction by estimating users' cognitive and affective states in real time. Emotion recognition is a central capability here, drawing on behavioral cues (voice, text, facial expressions, gestures, eye tracking) and physiological signals (EEG, ECG, heart rate, galvanic skin response, etc.) to infer emotions and enable closed-loop regulation. The review notes that emotion recognition is trending toward more contextual, multimodal approaches with improved hardware support, yet current work often lacks ecological validity and struggles to generalize across cultures. Beyond emotions, interaction-relevant constructs such as trust, engagement, social presence, and rapport are important but still lack accurate, real-time quantitative models and remain hard to use for flexible behavioral adaptation. To move from short-term reactions to sustained personalization, the review emphasizes adapting interaction styles using inferred personality traits and memory of past interactions, routines, and preferences to improve comfort, trust, and adherence.

The paper then outlines critical concerns: clinical and practical validation is still limited, and many studies use small samples and short-term interventions with heterogeneous measures and weak controls, motivating unified evaluation frameworks and more rigorous, longitudinal designs. "Ethically, overreliance on simulated 'pseudo-empathy' can foster misuse and false attachment and displace real relationships, while generative-AI hallucinations can be harmful in high-stakes healthcare, so artificial empathy should remain an adjunct rather than a replacement for interpersonal communication," said Jia.
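The closed-loop idea described above can be made concrete with a minimal sketch: sense a few multimodal cues, infer a coarse affective state, and adapt the agent's behavior in response. This is purely illustrative; the cue names, thresholds, and adaptation rules below are assumptions for demonstration, not the review's method (real systems would use trained multimodal models rather than hand-set rules).

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One snapshot of user cues (all fields are illustrative)."""
    heart_rate: float   # beats per minute (physiological cue)
    gsr: float          # skin conductance, microsiemens (physiological cue)
    smile_score: float  # 0..1 facial-expression estimate (behavioral cue)

def infer_state(obs: Observation) -> str:
    """Toy fusion of cues into a coarse affective label."""
    aroused = obs.heart_rate > 95 or obs.gsr > 8.0
    positive = obs.smile_score > 0.5
    if aroused and not positive:
        return "stressed"
    if positive:
        return "engaged"
    return "neutral"

def adapt_behavior(state: str, difficulty: float) -> tuple[float, str]:
    """Close the loop: adjust task difficulty and pick an empathic response."""
    if state == "stressed":
        return max(0.1, difficulty - 0.2), "offer encouragement and slow down"
    if state == "engaged":
        return min(1.0, difficulty + 0.1), "acknowledge progress"
    return difficulty, "continue as planned"

# One iteration of the loop: elevated arousal with a neutral face
obs = Observation(heart_rate=102, gsr=9.2, smile_score=0.2)
state = infer_state(obs)
difficulty, response = adapt_behavior(state, difficulty=0.5)
print(state, response)  # stressed: ease the task and respond supportively
```

In a deployed system this loop would run continuously, with the state estimator replaced by a contextual multimodal model and the adaptation policy personalized from interaction history, as the review advocates.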




