Jennifer Hu

MIT

Topics: pragmatics, language models, scalar implicature, alternatives, minimal pairs, interpretability, large language models, prompting, theory of mind, scalar diversity, evaluation, syntax, semantics

6 presentations · 41 views

SHORT BIO

Jennifer Hu recently earned her PhD in Brain and Cognitive Sciences at MIT and will be joining the Harvard Kempner Institute as a Research Fellow. She is interested in the computational and cognitive principles that underlie the human capacity for language.

PRESENTATIONS

Prompting is not a substitute for probability measurements in large language models

Jennifer Hu and 1 other author

Expectations over unspoken alternatives predict pragmatic inferences

Jennifer Hu and 3 other authors

A fine-grained comparison of pragmatic language understanding in humans and language models

Jennifer Hu and 4 other authors

Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models

Yiwen Wang and 3 other authors

Empirical Support for a Rate-Distortion Account of Pragmatic Reasoning

Irene Zhou and 3 other authors

Competition from novel features drives scalar inferences in reference games

Jennifer Hu and 2 other authors
