Automatically generated radiology reports often receive high scores from existing evaluation metrics but fail to earn clinicians’ trust. This gap reveals fundamental flaws in how current metrics assess the quality of generated reports. We rethink the design and evaluation of these metrics and propose a clinically grounded Meta-Evaluation Framework. We define clinically grounded criteria spanning clinical alignment and key metric capabilities, including discrimination, robustness, and monotonicity. Using a fine-grained dataset of ground truth and rewritten report pairs annotated with error types and clinical significance labels, we systematically evaluate widely used metrics and uncover their limitations, such as failing to distinguish clinically significant errors, over-penalizing harmless variations, or lacking consistency across error severity levels. Our framework and dataset offer guidance for building more clinically reliable evaluation methods.
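One of the metric capabilities named above, monotonicity, can be made concrete with a small sketch: a trustworthy metric should assign progressively lower scores as the clinical severity of a report's errors increases. The snippet below is purely illustrative and uses a toy token-overlap F1 as a stand-in metric (`toy_metric`, `is_monotonic`, and the example reports are hypothetical, not part of the framework described in the abstract):

```python
def toy_metric(reference: str, candidate: str) -> float:
    """Stand-in for a real report-evaluation metric: token-overlap F1."""
    ref, cand = reference.split(), candidate.split()
    overlap = len(set(ref) & set(cand))
    if not overlap:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def is_monotonic(reference: str, variants_by_severity: list) -> bool:
    """variants_by_severity is ordered from least to most clinically severe.
    Returns True if the metric's score never increases with severity."""
    scores = [toy_metric(reference, v) for v in variants_by_severity]
    return all(a >= b for a, b in zip(scores, scores[1:]))

reference = "no acute cardiopulmonary abnormality heart size normal"
variants = [
    # harmless rewording (should be penalized little, if at all)
    "no acute cardiopulmonary abnormality heart size is normal",
    # minor finding change
    "no acute abnormality heart size mildly enlarged",
    # clinically significant error
    "large right pleural effusion with pneumothorax",
]
print(is_monotonic(reference, variants))  # → True for this toy metric
```

A meta-evaluation in the spirit of the abstract would run such a check over many annotated report pairs and report how often each candidate metric violates the ordering; the abstract's other criteria (discrimination, robustness) would need analogous probes.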