VIDEO DOI: https://doi.org/10.48448/jddr-y017

workshop paper

ACL 2024

August 15, 2024

Bangkok, Thailand

Hierarchical syntactic structure in human-like language models

keywords: hierarchy, fmri, parsing, transformers, syntax

Language models (LMs) are a meeting point for cognitive modeling and computational linguistics. How should they be designed to serve as adequate cognitive models? To address this question, this study contrasts two Transformer-based LMs that share the same architecture; only one of them analyzes sentences in terms of explicit hierarchical structure. Evaluating the two LMs against fMRI time series via the surprisal complexity metric implicates the superior temporal gyrus. These findings underline the need for hierarchical sentence structure in word-by-word models of human language comprehension.
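The surprisal complexity metric named in the abstract is the negative log probability of each word given its preceding context, -log2 P(w_t | w_<t). As a minimal, self-contained sketch of the idea, here is surprisal computed from a toy add-one-smoothed bigram model (an illustration of the metric only; the study itself derives surprisal from Transformer LMs, and the corpus and sentence here are invented):

```python
import math
from collections import Counter

def bigram_surprisal(corpus, sentence):
    """Per-word surprisal, -log2 P(w_t | w_{t-1}), from add-one-smoothed
    bigram counts over a toy corpus. Returns (word, surprisal) pairs for
    every word after the first."""
    tokens = corpus.split()
    vocab_size = len(set(tokens))
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    result = []
    words = sentence.split()
    for prev, w in zip(words, words[1:]):
        # Add-one (Laplace) smoothing so unseen bigrams get nonzero mass.
        p = (bigrams[(prev, w)] + 1) / (unigrams[prev] + vocab_size)
        result.append((w, -math.log2(p)))
    return result

# Hypothetical corpus and test sentence, for illustration only.
corpus = "the dog chased the cat the cat ran"
print(bigram_surprisal(corpus, "the dog ran"))
# "ran" never follows "dog" in the corpus, so its surprisal is higher
# than that of the attested bigram "the dog".
```

In a word-by-word model of comprehension, these per-word values are the predictor regressed against the fMRI time series: higher surprisal means the word was less expected given its context.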

