
Wenlin Yao
SHORT BIO
I am a Senior Research Scientist at Tencent AI Lab (Seattle). I earned my Ph.D. from the Department of Computer Science and Engineering at Texas A&M University (TAMU) in 2020, under the supervision of Prof. Ruihong Huang, and received my Bachelor's degree from Dalian University of Technology (DUT) in 2015. My research interests include weakly supervised semantics, low-resource learning, and general pre-trained models.
Presentations

When Reasoning Meets Information Aggregation: A Case Study with Sports Narratives
Yebowen Hu and 7 other authors

InFoBench: Evaluating Instruction Following Ability in Large Language Models
Yiwei Qin and 9 other authors

Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models
Xinran Zhao and 6 other authors

WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models
Hongliang He and 7 other authors

From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning
Xuansheng Wu and 6 other authors

MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning
Fuxiao Liu and 7 other authors

Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Compositional Operations
James Y. Huang and 5 other authors

How do Words Contribute to Sentence Semantics? Revisiting Sentence Embeddings with a Perturbation Method
Wenlin Yao and 7 other authors

C-MORE: Pretraining to Answer Open-Domain Questions by Consulting Millions of References
Xiang Yue and 5 other authors

Learning-by-Narrating: Narrative Pre-Training for Zero-Shot Dialogue Comprehension
Chao Zhao and 5 other authors