
Eric Mitchell

Stanford University

Research topics: question-answering, meta-learning, NLI, knowledge, consistency, calibration, online learning, adaptation, logical inference, programming language, LLM, factor-graph, MAX-SAT, RLHF, verbalization

4 presentations · 9 views

SHORT BIO

I am a fourth-year PhD student in Stanford’s CS department, where I’m fortunate to be advised by Chelsea Finn and Christopher D. Manning. The goal of my research is to make the knowledge embedded in neural networks more reusable and updatable in an ever-changing world. I’m interested in deep learning generally, as well as meta-learning and continual learning more specifically, particularly in the context of large language models (or ‘Foundation Models’).

Presentations

Calibrating Language Models with Adaptive Temperature Scaling

Johnathan Xie and 4 other authors

Meta-Learning Online Adaptation of Language Models

Nathan Zixia Hu and 3 other authors

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback

Katherine Tian and 7 other authors

Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference

Eric Mitchell and 7 other authors
