keywords:
qualitative analysis
human-computer interaction
psychology
causal reasoning
survey
The present study examined how participants (N = 298) assessed the causality, blameworthiness, foreseeability, and counterfactuality of an AI or human therapist displaying one of three empathy levels, relative to the therapist's supervisor and a recommending clinician. Participants judged the human therapist as more causal and blameworthy than their supervisor when medium or low empathy was displayed, whereas no difference emerged between judgments of the AI therapist and its supervisor at any empathy level. Participants also did not differentiate causality or blameworthiness between the AI and human therapists, regardless of empathy level. However, they perceived the human therapist as having foreseen the outcome more than the AI therapist at the medium and low empathy levels. Qualitative analysis revealed that, when making these judgments, participants considered the directness of the causes to the outcome, counterfactual reasoning, and the inherent limitations of AI.