
Zexuan Zhong

Princeton University

Topics: retrieval-based language models, model compression, machine translation, adversarial attack, BERT, privacy, multi-hop question answering, pre-training, dense retrieval, retrieval, language modeling, large language models, efficiency, masking, structured pruning

7 presentations · 12 views

SHORT BIO

Zexuan Zhong is a Ph.D. student in the Department of Computer Science at Princeton University, advised by Prof. Danqi Chen. His research interests lie in natural language processing and machine learning. He received a J.P. Morgan PhD Fellowship in 2022.

Presentations

Poisoning Retrieval Corpora by Injecting Adversarial Passages

Zexuan Zhong and 3 other authors

MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions

Zexuan Zhong and 4 other authors

Privacy Implications of Retrieval-Based Language Models

Yangsibo Huang and 4 other authors

Should You Mask 15% in Masked Language Modeling?

Alexander Wettig and 3 other authors

Training Language Models with Memory Augmentation

Zexuan Zhong and 2 other authors

Structured Pruning Learns Compact and Accurate Models

Mengzhou Xia and 2 other authors

REST: Retrieval-Based Speculative Decoding

Zhenyu He and 4 other authors
