
Dacheng Tao
Topics: knowledge distillation, LLM, reinforcement learning algorithms, data augmentation, script, efficient training, evaluation, neural machine translation, benchmark, speech translation, text classification, grammatical error correction, fairness, dialogue generation, generalization
24 presentations · 7 views
Presentations

LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit
Ruihao Gong and 7 other authors

Self-Powered LLM Modality Expansion for Large Speech-Text Models
Tengfei Yu and 5 other authors

Revisiting Knowledge Distillation for Autoregressive Language Models
Qihuang Zhong and 5 other authors

Speech Sense Disambiguation: Tackling Homophone Ambiguity in End-to-End Speech Translation
Tengfei Yu and 5 other authors

Uncertainty Aware Learning for Language Model Alignment
Yikun Wang and 5 other authors

Revisiting Demonstration Selection Strategies in In-Context Learning
Keqin Peng and 6 other authors

SimDistill: Simulated Multi-Modal Distillation for BEV 3D Object Detection
Haimei Zhao and 5 other authors

TD²-Net: Toward Denoising and Debiasing for Video Scene Graph Generation
Xin Lin and 5 other authors

Multi-Step Denoising Scheduled Sampling: Towards Alleviating Exposure Bias for Diffusion Models
Zhiyao Ren and 6 other authors

Self-Evolution Learning for Discriminative Language Model Pretraining
Qihuang Zhong and 4 other authors

Token-Level Self-Evolution Training for Sequence-to-Sequence Learning
Keqin Peng and 6 other authors

TransGEC: Improving Grammatical Error Correction with Translationese
Tao Fang and 7 other authors

Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Qihuang Zhong and 6 other authors

Improving Simultaneous Machine Translation with Monolingual Data
Hexuan Deng and 5 other authors

Learning to Learn Better for Video Object Segmentation
Lefei Zhang and 3 other authors

DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer
Juhua Liu and 5 other authors