
ACL-IJCNLP 2021

August 03, 2021

Thailand


Keywords: shortcut solution, machine reading comprehension, analysis

Recent studies report that many machine reading comprehension (MRC) models can perform close to, or even better than, humans on benchmark datasets. However, existing work indicates that many MRC models may learn shortcuts to outwit these benchmarks, while their performance in real-world applications remains unsatisfactory. In this work, we attempt to explore why these models learn shortcuts instead of the expected comprehension skills. Based on the observation that a large portion of questions in current datasets have shortcut solutions, we argue that the larger proportion of shortcut questions in training data makes models rely excessively on shortcut tricks. To investigate this hypothesis, we carefully design two synthetic datasets with annotations that indicate whether a question can be answered using a shortcut solution. We further propose two new methods to quantitatively analyze the learning difficulty of shortcut and challenging questions, and to reveal the inherent learning mechanism behind the different performance on the two kinds of questions. A thorough empirical analysis shows that MRC models tend to learn shortcut questions earlier than challenging questions, and that high proportions of shortcut questions in training sets hinder models from exploring sophisticated reasoning skills in the later stages of training.
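
The per-question annotations make this kind of analysis straightforward to instrument. The sketch below is a rough illustration only (not the authors' released code or methods): it tracks accuracy separately on shortcut-annotated and challenging subsets at each training checkpoint, so the two learning curves can be compared. The `is_shortcut` field, `predict_fn`, and the checkpoint list are hypothetical placeholders.

```python
# Illustrative sketch (assumed interface, not the paper's actual code):
# compare how quickly a model learns "shortcut" vs. "challenging" questions
# by recording per-subset accuracy at each training checkpoint.

from collections import defaultdict

def subset_accuracy(model, examples, predict_fn):
    """Accuracy over a list of annotated QA examples."""
    correct = sum(1 for ex in examples if predict_fn(model, ex) == ex["answer"])
    return correct / max(len(examples), 1)

def track_learning_curves(checkpoints, eval_set, predict_fn):
    """For every (step, model) checkpoint, record accuracy on both subsets."""
    shortcut = [ex for ex in eval_set if ex["is_shortcut"]]        # hypothetical annotation
    challenging = [ex for ex in eval_set if not ex["is_shortcut"]]
    curves = defaultdict(list)
    for step, model in checkpoints:  # e.g. [(1000, model_1k), (2000, model_2k), ...]
        curves["step"].append(step)
        curves["shortcut_acc"].append(subset_accuracy(model, shortcut, predict_fn))
        curves["challenging_acc"].append(subset_accuracy(model, challenging, predict_fn))
    return curves
```

Plotting the two accuracy curves against training steps would show whether shortcut questions are indeed fitted earlier, which is the kind of evidence the empirical analysis above relies on.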

Downloads

  • Slides
  • Paper

