People increasingly seek healthcare information from Large Language Models (LLMs), yet the nature of these conversational interactions and their inherent risks remain largely unexplored. In this paper, we filter large-scale conversational AI datasets to produce HealthChat-14K, a curated dataset of 14K real-world conversations comprising 62K user messages. Using HealthChat-14K and a clinician-driven taxonomy of how users interact with LLMs when seeking healthcare information, we systematically study users' conversational trajectories, interaction patterns, emotional behaviors, and sycophancy-inducing interactions. Our analysis reveals the nature of the health information users seek, their typical conversational trajectories, their expressions of affect, and specific interaction patterns related to conversational challenges and leading questions, underscoring the need to improve the healthcare support capabilities of LLMs deployed as conversational AI. We will release our analyzed conversations and corresponding analysis artifacts as a curated dataset to foster future research.