VIDEO DOI: https://doi.org/10.48448/jsfz-vy40

Poster

ACL 2024

August 13, 2024

Bangkok, Thailand

Emergent Word Order Universals from Cognitively-Motivated Language Models

Keywords: linguistic typology, cognitive modeling, psycholinguistics

The world's languages exhibit certain so-called typological or implicational universals; for example, Subject-Object-Verb (SOV) languages typically use postpositions. Explaining the source of such biases is a key goal of linguistics. We study word-order universals through a computational simulation with language models (LMs). Our experiments show that typologically-typical word orders tend to have lower perplexity estimated by LMs with cognitively plausible biases: syntactic biases, specific parsing strategies, and memory limitations. This suggests that the interplay of cognitive biases and predictability (perplexity) can explain many aspects of word-order universals. It also showcases the advantage of cognitively-motivated LMs, typically employed in cognitive modeling, in the simulation of language universals.
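The core measurement behind this simulation, comparing the perplexity an LM assigns to competing word orders, can be illustrated with a minimal sketch. This toy example uses an add-one-smoothed bigram model rather than the cognitively-motivated LMs the paper actually studies, and the tiny corpus and sentences are invented for illustration only:

```python
import math
from collections import Counter

def bigram_perplexity(corpus, sentence):
    # Train add-one-smoothed bigram counts on a toy corpus.
    tokens = [w for s in corpus for w in ["<s>"] + s + ["</s>"]]
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    vocab_size = len(set(tokens))

    # Perplexity = exp( -(1/N) * sum_i log P(w_i | w_{i-1}) ).
    seq = ["<s>"] + sentence + ["</s>"]
    log_prob = 0.0
    for prev, cur in zip(seq, seq[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(seq) - 1))

# A toy corpus whose sentences follow one consistent word order,
# compared against a sentence in an order unseen in that corpus.
corpus = [["she", "home", "to", "went"], ["he", "school", "to", "went"]]
typical = ["she", "school", "to", "went"]   # matches the corpus's order
atypical = ["she", "went", "to", "school"]  # order unseen in the corpus
```

Under this setup, `bigram_perplexity(corpus, typical)` comes out lower than `bigram_perplexity(corpus, atypical)`: the model finds the familiar order more predictable, which is the same lower-perplexity signal the paper uses (with far richer models) to compare typologically typical and atypical orders.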
