VIDEO DOI: https://doi.org/10.48448/rp4a-7v17

workshop paper

ACL 2024

August 15, 2024

Bangkok, Thailand

How can large language models become more human?

keywords:

sheaf theory

garden-path sentences

large language models

psycholinguistics

Psycholinguistic experiments reveal that the efficiency of human language use rests on predictions at both the syntactic and lexical levels. Previous models of human prediction built on LLMs have used an information-theoretic measure called surprisal, which succeeds on naturalistic text in a wide variety of languages but under-performs on challenging text such as garden-path sentences. This paper introduces a novel framework that combines the lexical predictions of an LLM with the syntactic structures provided by a dependency parser, giving rise to an Incompatibility Fraction. Tested on two garden-path datasets, the new measure correlated well with human reading times, distinguished between easy and hard garden-path sentences, and outperformed surprisal.
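As a rough illustration of the baseline measure the abstract mentions (not the paper's Incompatibility Fraction), the surprisal of a word is the negative log probability a language model assigns it given the preceding context. A minimal sketch with made-up probabilities shows why garden-path continuations score high:

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of the probability the model assigns to a word."""
    return -math.log2(prob)

# Hypothetical next-word probabilities (illustrative only, not from any real model).
# In a garden-path sentence like "The old man the boats", the verb reading of
# "man" receives very low probability under the reader's initial parse.
p_expected = 0.25     # a continuation consistent with the initial parse
p_garden_path = 0.01  # the disambiguating word that forces reanalysis

print(surprisal(p_expected))     # 2.0 bits
print(surprisal(p_garden_path))  # ~6.64 bits: high surprisal, slow reading
```

In practice these probabilities would come from an LLM's next-token distribution; the abstract's point is that surprisal alone tracks naturalistic reading times well but under-predicts the difficulty of garden-path reanalysis.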
