

poster
Fine-Tuned Machine Translation Metrics Struggle in Unseen Domains
keywords:
domain bias
evaluation metrics
evaluation
machine translation
We introduce a new, extensive Multidimensional Quality Metrics (MQM)-annotated dataset covering 11 language pairs in the biomedical domain. We use this dataset to investigate whether machine translation (MT) metrics that are fine-tuned on human-generated MT quality judgments are robust to domain shifts between training and inference. We find that fine-tuned metrics exhibit a substantial performance drop in the unseen-domain scenario relative to both metrics that rely on the surface form and pre-trained metrics that are not fine-tuned on MT quality judgments.
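To make the contrast concrete, a "surface form" metric scores a hypothesis purely by string overlap with a reference, with no learned component to be affected by domain shift. Below is a minimal, self-contained sketch of a chrF-style character n-gram F-score in pure Python; it is an illustration of the metric family, not the paper's evaluation code or the official sacrebleu implementation (the `max_n` and `beta` defaults mirror common chrF settings but are assumptions here).

```python
from collections import Counter

def char_ngrams(text, n):
    """Collect character n-grams; spaces are stripped, chrF-style."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def surface_f_score(hypothesis, reference, max_n=6, beta=2.0):
    """Average character n-gram F-beta between hypothesis and reference.

    A purely string-based score: no training data, hence no notion of
    an 'unseen domain' -- the property the abstract contrasts with
    fine-tuned metrics.
    """
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # strings too short for this n-gram order
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```

Fine-tuned metrics instead feed the hypothesis (and reference or source) through a pre-trained encoder and regress toward human quality judgments, which is where domain mismatch between the fine-tuning data and the evaluated text can hurt.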