
Eric Mitchell
Stanford University
Topics: question-answering, meta-learning, NLI, knowledge, consistency, calibration, online learning, adaptation, logical inference, programming language, LLM, factor-graph, MAX-SAT, RLHF, verbalization
4 presentations · 9 views
SHORT BIO
I am a fourth-year PhD student in Stanford’s CS department, where I’m fortunate to be advised by Chelsea Finn and Christopher D. Manning. The goal of my research is to make the knowledge embedded in neural networks more reusable and updatable in an ever-changing world. I’m interested in deep learning generally, as well as meta-learning and continual learning more specifically, particularly in the context of large language models (or ‘Foundation Models’).
Presentations

Calibrating Language Models with Adaptive Temperature Scaling
Johnathan Xie and 4 other authors

Meta-Learning Online Adaptation of Language Models
Nathan Zixia Hu and 3 other authors

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback
Katherine Tian and 7 other authors

Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference
Eric Mitchell and 7 other authors