
Dacheng Tao

Topics: llm, knowledge distillation, reinforcement learning algorithms, data augmentation, large language model, in-context learning, grammatical error correction, speech translation, evaluation, neural machine translation, text classification, script, benchmark, efficient training, reasoning

27 presentations · 7 views

Presentations

Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models

Liang Ding and 7 other authors

Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning

Liang Ding and 5 other authors

Modeling All Response Surfaces in One for Conditional Search Spaces

Jiaxing Li and 6 other authors

LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit

Ruihao Gong and 7 other authors

Self-Powered LLM Modality Expansion for Large Speech-Text Models

Tengfei Yu and 5 other authors

Revisiting Knowledge Distillation for Autoregressive Language Models

Qihuang Zhong and 5 other authors

Speech Sense Disambiguation: Tackling Homophone Ambiguity in End-to-End Speech Translation

Tengfei Yu and 5 other authors

Uncertainty Aware Learning for Language Model Alignment

Yikun Wang and 5 other authors

Revisiting Demonstration Selection Strategies in In-Context Learning

Keqin Peng and 6 other authors

SimDistill: Simulated Multi-Modal Distillation for BEV 3D Object Detection

Haimei Zhao and 5 other authors

TD²-Net: Toward Denoising and Debiasing for Video Scene Graph Generation

Xin Lin and 5 other authors

Multi-Step Denoising Scheduled Sampling: Towards Alleviating Exposure Bias for Diffusion Models

Zhiyao Ren and 6 other authors

Self-Evolution Learning for Discriminative Language Model Pretraining

Qihuang Zhong and 4 other authors

Token-Level Self-Evolution Training for Sequence-to-Sequence Learning

Keqin Peng and 6 other authors

TransGEC: Improving Grammatical Error Correction with Translationese

Tao Fang and 7 other authors

Revisiting Token Dropping Strategy in Efficient BERT Pretraining

Qihuang Zhong and 6 other authors
