
Linyang Li
Research topics: evaluation, llm, user simulator, task-oriented dialogue, reasoning, dialogue systems, robustness, autoregressive model, generation, hallucination, alignment, large language model, belief revision, pretraining models, adversarial defense
14 presentations · 5 views
SHORT BIO
I'm Linyang Li, a PhD student at Fudan University, advised by Prof. Xipeng Qiu. My research focuses on adversarial robustness in LLMs.
Presentations

InferAligner: Inference-Time Alignment for Harmlessness through Cross-Model Guidance
Pengyu Wang and 8 other authors

Turn Waste into Worth: Rectifying Top-$k$ Router of MoE
Zhiyuan Zeng and 9 other authors

AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling
Jun Zhan and 15 other authors

LLatrieval: LLM-Verified Retrieval for Verifiable Generation
Xiaonan Li and 5 other authors

SeqXGPT: Sentence-Level AI-Generated Text Detection
Pengyu Wang and 5 other authors

Character-LLM: A Trainable Agent for Role-Playing
Yunfan Shao and 3 other authors

Text Adversarial Purification as Defense against Adversarial Attacks
Linyang Li

Mitigating Negative Style Transfer in Hybrid Dialogue System
Shimin Li and 3 other authors

Is MultiWOZ a Solved Task? An Interactive TOD Evaluation Framework with User Simulator
Qinyuan Cheng and 5 other authors

"Is Whole Word Masking Always Better for Chinese BERT?": Probing on Chinese Grammatical Error Correction
Yong Dai and 7 other authors

Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples
Jianhan Xu and 6 other authors

Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
Zongyi Li and 7 other authors

Token-Aware Virtual Adversarial Training in Natural Language Understanding
Linyang Li and 1 other author
