
Catherine Chen

Graduate student @ UC Berkeley

Research topics: robustness, generalization, layout, distribution shift, brain, multimodal grammar induction, grammar induction with large language models, unsupervised grammar induction, attention, language


SHORT BIO

Catherine Chen is a PhD student at UC Berkeley studying NLP and computational neuroscience. She is advised by Dan Klein and Jack Gallant, and has been supported by an NSF GRFP and an IBM PhD Fellowship. She was previously an undergraduate at Princeton University and then received a Fulbright grant to study causal inference for neuroimaging at LMU Munich / MPI Tübingen. Outside of research, she likes to run and learn natural languages.

Presentations

Re-evaluating the Need for Visual Signals in Unsupervised Grammar Induction

Boyi Li and 9 other authors

Are Layout-Infused Language Models Robust to Layout Distribution Shifts? A Case Study with Scientific Documents

Catherine Chen and 5 other authors

Constructing Taxonomies from Pretrained Language Models

Kevin Lin and 2 other authors

Attention weights accurately predict language representations in the brain

Mathis Lamarre and 2 other authors
