
Percy Liang

Stanford University

Topics: language model, domain adaptation, text generation, question answering, summarization, contrastive, natural language generation, pretraining, bionlp, graph, lightweight fine-tuning, prefix-tuning, soft prompting, decoding, llms

8 presentations · 65 views

SHORT BIO

Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His two research goals are (i) to make machine learning more robust, fair, and interpretable; and (ii) to make computers easier to communicate with through natural language. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).

Presentations

Benchmarking Large Language Models for News Summarization

Tianyi Zhang and 5 other authors

Contrastive Decoding: Open-ended Text Generation as Optimization

Xiang Lisa Li and 7 other authors

LinkBERT: Pretraining Language Models with Document Links

Michihiro Yasunaga and 2 other authors

Conditional probing: measuring usable information beyond a baseline

John Hewitt and 3 other authors

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Xiang Lisa Li and 1 other author

Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality

Mina Lee and 4 other authors

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

Michihiro Yasunaga and 4 other authors
