Shuohuan Wang
Keywords: large language models, continual learning, language modeling, ensemble, multi-task, efficient inference, mixture-of-experts, long-document, retrospective, recurrence mechanism, spelling error correction, phonetic features, ERNIE, democracy, data-efficient, RLHF
12 presentations · 10 views
SHORT BIO
Staff research and development engineer at Baidu.
Presentations
HFT: Half Fine-Tuning for Large Language Models
Tingfeng Hui and 5 other authors
Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging
Tingfeng Hui and 5 other authors

Curiosity-Driven Reinforcement Learning from Human Feedback
Haoran Sun and 5 other authors
BeamLoRA: Beam-Constraint Low-Rank Adaptation
Naibin Gu and 9 other authors
Inner Thinking Transformer: Leveraging Dynamic Depth Scaling to Foster Adaptive Internal Thinking
Yilong Chen and 9 other authors
NACL: A General and Effective KV Cache Eviction Framework for LLM at Inference Time
Yilong Chen and 9 other authors

ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora
Shuohuan Wang and 6 other authors

Correcting Chinese Spelling Errors with Phonetic Pre-training
Ruiqing Zhang and 7 other authors
abcbpc at SemEval-2021 Task 7: ERNIE-based Multi-Task Model for Detecting and Rating Humor and Offense
Chao Pang and 8 other authors

Multi-lingual Learning for Offensive Language Identification using Pre-trained Language Models
Shuohuan Wang and 3 other authors

On Training Data Influence of GPT Models
Yekun Chai and 5 other authors
Dual Modalities of Text: Visual and Textual Generative Pre-Training
Yekun Chai and 5 other authors