workshop paper
Hierarchical syntactic structure in human-like language models
keywords:
hierarchy
fmri
parsing
transformers
syntax
Language models (LMs) are a meeting point for cognitive modeling and computational linguistics. How should they be designed to serve as adequate cognitive models? To address this question, this study contrasts two Transformer-based LMs that share the same architecture; only one of them analyzes sentences in terms of explicit hierarchical structure. When the two LMs are evaluated against fMRI time series via the surprisal complexity metric, the results implicate the superior temporal gyrus. These findings underline the need for hierarchical sentence structures in word-by-word models of human language comprehension.
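The surprisal complexity metric referenced above is standardly defined as the negative log probability of each word given its preceding context, surprisal(w_t) = -log P(w_t | w_1 .. w_{t-1}). As a minimal sketch of how such word-by-word surprisals are extracted from a Transformer LM, the following assumes an off-the-shelf GPT-2 model via the HuggingFace transformers library as a stand-in; the paper's own LMs and their parser-augmented variant are not reproduced here.

```python
# Minimal sketch: per-token surprisal from a causal Transformer LM.
# Assumption: GPT-2 stands in for the paper's LMs; the formula itself
# is the standard surprisal(w_t) = -log P(w_t | w_1 .. w_{t-1}).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence: str):
    """Return (token, surprisal-in-bits) pairs for every token after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    # Log-probability assigned to each actual next token, given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.size(0)), targets]
    bits = nats / torch.log(torch.tensor(2.0))  # convert nats to bits
    return list(zip(tokenizer.convert_ids_to_tokens(targets), bits.tolist()))

print(token_surprisals("The cat sat on the mat."))
```

In studies of this kind, per-word surprisals are typically convolved with a hemodynamic response function and regressed against the fMRI time series, so that regions such as the superior temporal gyrus can be tested for sensitivity to the metric.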