
Hongyin Luo
fairness
zero-shot learning
meta learning
question answering
bias
large language models
knowledge distillation
natural language processing
low resource
entailment
task-oriented
language modeling
natural language understanding
pretraining
information extraction
SHORT BIO
Hongyin Luo is a postdoctoral associate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received a bachelor’s degree from Tsinghua University in 2016 and a Ph.D. in computer science from MIT EECS in 2022. His research focuses on improving the efficiency, transparency, and reasoning ability of language models. His latest research combines natural language with different formal reasoning engines, including entailment models and program interpreters. He has built small language models that outperform GPT-3 (175B) with 1/500 of the computation, self-denoising language models that handle the noise of search engine results, and natural language embedded programs that achieve accurate reasoning without task-specific examples.
Presentations

Adaptive Query Rewriting: Aligning Rewriters through Marginal Probability of Conversational Answers
Tianhua Zhang and 5 other authors

Self-Specialization: Uncovering Latent Expertise within Large Language Models
Junmo Kang and 8 other authors

Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning
Tianhua Zhang and 9 other authors

Entailment as Robust Self-Learner
Hongyin Luo

Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning
Hongyin Luo and 1 other author

Cooperative Self-training of Machine Reading Comprehension
Hongyin Luo

DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
Yung-Sung Chuang and 9 other authors

Meta-learning for downstream aware and agnostic pretraining
Hongyin Luo

Mitigating Biases in Toxic Language Detection through Invariant Rationalization
Yung-Sung Chuang and 6 other authors