Yejin Choi

Topics: generation, commonsense, summarization, evaluation, large language models, dataset, text generation, distillation, robustness, reasoning, language model, natural language processing, dialogue, reinforcement learning, factuality

58 presentations · 230 views · 2 citations

SHORT BIO

Yejin Choi is the Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a senior research manager at AI2, where she oversees the Mosaic project. Her research investigates a wide variety of problems across NLP and AI, including commonsense knowledge and reasoning, neural language (de-)generation, language grounding with vision and experience, and AI for social good. She is a co-recipient of the ACL Test of Time Award in 2021, the CVPR Longuet-Higgins Prize (test of time award) in 2021, a NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI's 10 to Watch in 2016, and the ICCV Marr Prize (best paper award) in 2013. She received her Ph.D. in Computer Science from Cornell University and her B.S. in Computer Science and Engineering from Seoul National University in Korea.

Presentations

Selective “Selective Prediction”: Reducing Unnecessary Abstention in Vision-Language Reasoning

Tejas Srinivasan and 6 other authors

Agent Lumos: Unified and Modular Training for Open-Source Language Agents

Da Yin and 6 other authors

CULTURE-GEN: Natural Language Prompts Reveal Uneven Culture Presence in Language Models

Huihan Li and 4 other authors

JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models

Jillian Fisher and 5 other authors

MacGyver: Are Large Language Models Creative Problem Solvers?

Yufei Tian and 8 other authors

Impossible Distillation for Paraphrasing and Summarization: How to Make High-quality Lemonade out of Small, Low-quality Model

Jaehun Jung and 7 other authors

Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models

Natalie Shapira and 7 other authors

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties

Taylor Sorensen and 12 other authors

Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning

Ximing Lu and 16 other authors

BotPercent: Estimating Bot Populations in Twitter Communities

Zhaoxuan Tan and 6 other authors

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

Hyunwoo Kim and 6 other authors

Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms

Seungju Han and 7 other authors

Crystal: Introspective Reasoners Reinforced with Self-Feedback

Jiacheng Liu and 4 other authors

Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements

Jiacheng Liu and 5 other authors

SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization

Hyunwoo Kim and 11 other authors
