ACL-IJCNLP 2021

August 03, 2021

Thailand

Modern task-oriented semantic parsing approaches typically use seq2seq transformers to map textual utterances to semantic frames composed of intents and slots. While these models are empirically strong, their specific strengths and weaknesses have largely remained unexplored. In this work, we study BART and XLM-R, two state-of-the-art parsers, across both monolingual and multilingual settings. Our experiments yield several key results: transformer-based parsers struggle not only with disambiguating intents and slots, but surprisingly also with producing syntactically valid frames. Though pre-training imbues transformers with syntactic inductive biases, we find the ambiguity of copying utterance spans into frames often leads to tree invalidity, indicating that span extraction is a major bottleneck for current parsers. However, as a silver lining, we show transformer-based parsers give sufficient indicators for whether a frame is likely to be correct or incorrect, making them easier to deploy in production settings.
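
The two failure modes the abstract describes are easy to make concrete. Below is a minimal sketch, assuming a TOP-style bracketed linearization of intents and slots (an assumption for illustration, not necessarily the paper's exact target format): one function checks the syntactic validity the abstract refers to (balanced brackets, i.e. the output decodes to a well-formed tree), and another checks whether every slot span was copied verbatim from the utterance, the span-extraction bottleneck identified above.

```python
import re

def is_syntactically_valid(frame: str) -> bool:
    """True if every '[' has a matching ']', i.e. the frame decodes to a well-formed tree."""
    depth = 0
    for ch in frame:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
            if depth < 0:  # closing bracket with no open frame
                return False
    return depth == 0

def spans_copied_verbatim(frame: str, utterance: str) -> bool:
    """True if each slot span in the frame is an exact substring of the input utterance."""
    # Slot spans are the text between a slot label and its closing bracket
    # (flat slots only; nested structure is ignored in this sketch).
    spans = re.findall(r"\[SL:\w+ ([^\[\]]+?) ?\]", frame)
    return all(span in utterance for span in spans)

if __name__ == "__main__":
    # Hypothetical example, not taken from the paper's data.
    utterance = "set an alarm for 7 am tomorrow"
    good = "[IN:CREATE_ALARM [SL:DATE_TIME 7 am tomorrow ] ]"
    bad = "[IN:CREATE_ALARM [SL:DATE_TIME 7 am tomorow ]"  # altered span, missing bracket

    print(is_syntactically_valid(good), spans_copied_verbatim(good, utterance))  # True True
    print(is_syntactically_valid(bad), spans_copied_verbatim(bad, utterance))    # False False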
