AI systems are widely proposed as second-opinion advisors in clinical diagnosis, promising improved decision accuracy and clinician confidence while preserving human oversight. Deployment in real-world practice, however, faces a critical barrier: clinicians' reliance on AI is often miscalibrated, manifesting as misuse (over-reliance driven by automation bias) and disuse (under-utilization driven by self-anchoring bias). This paper addresses these deployment challenges by systematically analyzing how such reliance patterns affect diagnostic accuracy, confidence, and decision-making across diverse medical specialties. We report results from controlled simulations involving over 300 medical professionals across six diagnostic settings (including knee MRI analysis, spinal X-rays, cardiac ECG evaluation, and gastrointestinal endoscopy), using a human-first, AI-second workflow. Although AI advice improved average diagnostic accuracy (+2 percentage points) and clinician confidence (+3 points on a normalized scale), appropriate reliance remained well below 50% overall, with disuse emerging as the more prevalent and consequential barrier. We introduce and validate Appropriate Reliance as an actionable metric for assessing and improving human-AI collaboration, and propose integrating it into system development workflows, clinician training, and regulatory evaluations. By identifying the sociotechnical barriers and offering evidence-based design insights, this work provides practical guidance for developers, healthcare institutions, and policymakers seeking to deploy second-opinion AI as a collaborative advisor that enhances diagnostic safety, accountability, and patient care.
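The abstract does not spell out how Appropriate Reliance is computed. One common operationalization in the human-AI teaming literature classifies each post-advice decision as appropriate reliance, misuse, or disuse based on whether the clinician's initial answer, the AI's advice, and the final answer were correct. The sketch below follows that convention; the `Case` fields and the classification rules are illustrative assumptions, not necessarily the paper's exact definition.

```python
from dataclasses import dataclass

@dataclass
class Case:
    # One clinician decision in a human-first, AI-second workflow.
    # These field names are hypothetical, not taken from the paper.
    initial_correct: bool  # clinician's answer before seeing AI advice
    ai_correct: bool       # whether the AI's second opinion was correct
    final_correct: bool    # clinician's answer after seeing AI advice

def reliance_summary(cases):
    """Classify each decision and compute an appropriate-reliance rate.

    Appropriate reliance: the final answer is correct (the clinician
    adopted correct advice or overrode incorrect advice).
    Disuse: the AI was correct but the final answer is wrong.
    Misuse: the clinician abandoned a correct initial answer and the
    (incorrect) AI advice left the final answer wrong.
    """
    appropriate = misuse = disuse = 0
    for c in cases:
        if c.final_correct:
            appropriate += 1
        elif c.ai_correct:
            disuse += 1          # failed to adopt correct advice
        elif c.initial_correct:
            misuse += 1          # dropped a correct answer for wrong advice
        # else: clinician and AI were both wrong; no correct option existed
    return {
        "appropriate": appropriate,
        "misuse": misuse,
        "disuse": disuse,
        "appropriate_rate": appropriate / len(cases) if cases else 0.0,
    }

# Illustrative data only: four decisions, one of each interesting kind.
cases = [
    Case(initial_correct=False, ai_correct=True, final_correct=True),   # adopted correct advice
    Case(initial_correct=True, ai_correct=False, final_correct=True),   # overrode wrong advice
    Case(initial_correct=False, ai_correct=True, final_correct=False),  # disuse
    Case(initial_correct=True, ai_correct=False, final_correct=False),  # misuse
]
summary = reliance_summary(cases)
# appropriate_rate is 0.5 here: two of four decisions used the advice well
```

Under this reading, the paper's headline finding (appropriate reliance below 50%, dominated by disuse) corresponds to `appropriate_rate < 0.5` with `disuse > misuse` over the study's decision logs.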
