Albert Webson

Brown University, USA

Topics: human study, prompt understanding, human baseline


SHORT BIO

I’m a CS PhD student and philosophy MA student advised by Ellie Pavlick at Brown University. I study how developmental psychology and comparative cognition inform what kinds of behavioral and representational evaluations we should use for language models in order to measure the extent to which they really understand language.

My current research focuses on evaluating language models’ understanding of prompts. I am a co-first author of T0 (https://arxiv.org/abs/2110.08207), one of the first LLMs trained on hundreds of manually curated prompts, which enable zero-shot generalization to novel tasks. Despite the substantial performance improvements of T0-style instruction tuning, I am also a critic of my own work: in a paper I am presenting at this conference, I argue that even T0 and GPT-3 still fall far short of human-level understanding of prompts (https://arxiv.org/abs/2109.01247).

Presentations

Are Language Models Worse than Humans at Following Prompts? It's Complicated

Albert Webson and 3 other authors

Do Prompt-Based Models Really Understand the Meaning of Their Prompts?

Albert Webson and 1 other author

