
Yilin Shen
model compression
intent classification
large language models
out-of-domain detection
efficient inference
domain-regularized module
vision transformer
structured pruning
SVD
SVD compression transformer
efficient language models
efficient large language models
multi-token prediction
7 presentations
30 views
Short Bio
Yilin Shen is a research scientist at the AI Center at Samsung Research America (SRA). He received his Ph.D. in computer science from the University of Florida. His current research interests span artificial intelligence areas including natural language processing, multimodal learning, and on-device AI. He has published 80+ papers and filed 25+ US patents across multiple disciplines, including artificial intelligence, data mining, privacy & security, and complex networks. His awards include an ACL Best Demo nomination, a CIKM Best Paper Award Runner-Up, first place in the Linguistics Meets Image and Video Retrieval challenge at ICCV, and Samsung Best Paper Awards.
Presentations

DynaMo: Accelerating Language Model Inference with Dynamic Multi-Token Sampling
Shikhar Tuli and 5 other authors

Adaptive Rank Selections for Low-Rank Approximation of Language Models
Shangqian Gao and 4 other authors

GOHSP: A Unified Framework of Graph and Optimization-based Heterogeneous Structured Pruning for Vision Transformer
Burak Uzkent and 4 other authors

Numerical Optimizations for Weighted Low-rank Estimation on Language Models
Ting Hua and 5 other authors

Improving Zero-Shot Phrase Grounding via Reasoning on External Knowledge and Spatial Relations
Zhan Shi and 3 other authors

Enhancing the generalization for Intent Classification and Out-of-Domain Detection in SLU
Yilin Shen and 3 other authors

Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding
Ting Hua and 4 other authors