
Yichun Yin
Huawei Noah’s Ark Lab
TOPICS
automated theorem proving, language models, dialogue systems, generative language models, hyper-parameter optimization, efficient BERT, AutoTinyBERT, information seeking, large language models, aspect term extraction, positional dependency-based word embedding, triple embedding, weight reusing, pre-trained language models, efficient pre-training, learning & optimization for SNLP, user simulators, proactive dialogue generation, TOD systems
SHORT BIO
Yichun Yin received his PhD in Computer Science from Peking University, China, in 2018. He is currently a researcher at Huawei Noah's Ark Lab. His research interests include deep learning and natural language processing.
Presentations

Preparing Lessons for Progressive Training on Language Models
Yu Pan and 8 other authors

TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models
Jing Xiong and 13 other authors

DT-Solver: Automated Theorem Proving with Dynamic-Tree Sampling Guided by Proof-level Value Function
Haiming Wang and 12 other authors

AutoConv: Automatically Generating Information-seeking Conversations with Large Language Models
Siheng Li and 8 other authors

One Cannot Stand for Everyone! Leveraging Multiple User Simulators to Train Task-oriented Dialogue Systems
Yajiao Liu and 7 other authors

NewsDialogues: Towards Proactive News Grounded Conversation
Siheng Li and 9 other authors

Generate & Rank: A Multi-task Framework for Math Word Problems
Jianhao Shen and 6 other authors

AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models
Yichun Yin and 5 other authors

PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction
Yichun Yin