
Fei Huang
fine-tuning
pre-training
benchmarking
language modeling
prompting
pre-trained models
non-autoregressive text generation
efficient text generation
datasets for low resource languages
SHORT BIO
Fei Huang is a PhD candidate in the Department of Computer Science at Tsinghua University. His research focuses on natural language generation, particularly non-autoregressive text generation models. He has authored 10 papers published in top-tier conferences and journals, including ICML, ACL, EMNLP, and TACL.
Presentations

Predicting Rewards Alongside Tokens: Non-disruptive Parameter Insertion for Efficient Inference Intervention in Large Language Model
Chenhan Yuan and 6 other authors

Directed Acyclic Transformer Pre-training for High-quality Non-autoregressive Text Generation
Fei Huang and 2 other authors