
Tatsunori Hashimoto
Assistant Professor @ Stanford
Topics: summarization, hallucination, bias, fine-tuning, text generation, education, conversation, contrastive, natural language generation, language model, decoding, uptake, llms, text generation evaluation, spurious correlates
12 presentations · 23 views

Presentations

Removing RLHF Protections in GPT-4 via Fine-Tuning
Qiusi Zhan and 5 other authors

Benchmarking Large Language Models for News Summarization
Tianyi Zhang and 5 other authors

Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models
Kaitlyn Zhou and 2 other authors

Understanding generalization for instruction following and black-box language models
Tatsunori Hashimoto

Panel: The first workshop on generalisation (benchmarking) in NLP
Tatsunori Hashimoto and 2 other authors

Contrastive Decoding: Open-ended Text Generation as Optimization
Xiang Lisa Li and 7 other authors

Contrastive Error Attribution for Finetuned Language Models
Faisal Ladhak and 2 other authors

When Do Pre-Training Biases Propagate to Downstream Tasks? A Case Study in Text Summarization
Faisal Ladhak and 6 other authors

Spurious Correlations in Reference-Free Evaluation of Text Generation
Esin Durmus and 2 other authors

Measuring Conversational Uptake: A Case Study on Student-Teacher Interactions
Dorottya Demszky and 6 other authors

On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
Tianyi Zhang and 1 other author