
Liang Ding
LLM
non-autoregressive translation
knowledge distillation
knowledge graphs
speech translation
script
multi-hop reasoning
low-resource languages
unsupervised neural machine translation
uncertainty
hallucination
alignment
progressive multi-granularity training
in-context learning
autoregressive language model
11 presentations · 6 views
SHORT BIO
Liang Ding received his Ph.D. from the University of Sydney, supervised by Prof. Dacheng Tao. He is currently an algorithm scientist at JD Explore Academy, working on deep learning for NLP, including language model pretraining, language understanding, generation, and translation. He has published over 20 research papers at prestigious conferences in natural language processing and artificial intelligence, including ICLR, ACL, EMNLP, NAACL, COLING, and SIGIR; importantly, some of his work has been successfully applied in industry, e.g., Baidu DuerOS. He has more than 10 patents filed or granted. Liang served as Area Chair and Session Chair for ACL 2022 and SDM 2021, on the program committees of top conferences such as ACL, EMNLP, NAACL, and NeurIPS, and as a reviewer for top journals such as Computational Linguistics, Knowledge-Based Systems, and Neurocomputing. He has won several AI challenges, including IWSLT 2021, WMT 2020, and WMT 2019. Liang led the team that was the first to outperform human performance (in Dec. 2021) on two challenging tasks, and then took first place (in Jan. 2022) on the GLUE benchmark with an average score of 91.3.
Presentations

Self-Powered LLM Modality Expansion for Large Speech-Text Models
Tengfei Yu and 5 other authors

Context-aware Watermark with Semantic Balanced Green-red Lists for Large Language Models
Yuxuan Guo and 5 other authors

POMP: Probability-driven Meta-graph Prompter for LLMs in Low-resource Unsupervised Neural Machine Translation
Shilong Pan and 6 other authors

Revisiting Knowledge Distillation for Autoregressive Language Models
Qihuang Zhong and 5 other authors

Speech Sense Disambiguation: Tackling Homophone Ambiguity in End-to-End Speech Translation
Tengfei Yu and 5 other authors

Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding
Xintong Wang and 3 other authors

Uncertainty Aware Learning for Language Model Alignment
Yikun Wang and 5 other authors

Revisiting Demonstration Selection Strategies in In-Context Learning
Keqin Peng and 6 other authors

Progressive Multi-Granularity Training for Non-Autoregressive Translation
Liang Ding

Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation
Liang Ding

Context-Aware Cross-Attention for Non-Autoregressive Translation
Liang Ding