
Jiaming Shen
Research topics: information extraction, relation extraction, document-level relation extraction, language models, large language models, language pretraining, pretrained text encoders, ELECTRA, constituency parsing, unsupervised parsing, text ranking, learning-to-rank, search relevance, explanation
SHORT BIO
Jiaming Shen is a Ph.D. candidate in the Department of Computer Science at the University of Illinois Urbana-Champaign, where he works with Prof. Jiawei Han and Prof. Heng Ji. His research, which focuses on unleashing the hidden knowledge in unstructured text, lies at the intersection of data mining and natural language processing. Specifically, he proposes a data-driven framework to progressively construct, enrich, and apply taxonomies that empower knowledge-centric applications. He has published multiple papers in top-tier venues (e.g., KDD, WebConf, ACL, EMNLP, and SIGIR) and has collaborated with industrial and governmental research labs (e.g., Microsoft Research, Google Research, and the Army Research Laboratory) on technology transitions. Jiaming has received several fellowships and scholarships, including the Brian Totty Graduate Fellowship and the Yunni & Maxine Pao Memorial Fellowship. More information is available on his personal website: https://mickeystroller.github.io/
Presentations

Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning
Yue Yu and 7 other authors

Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting
Zhen Qin and 11 other authors

Eider: Empowering Document-level Relation Extraction with Efficient Evidence Extraction and Inference-stage Fusion
Yiqing Xie and 4 other authors

Training ELECTRA Augmented with Multi-word Selection
Jiaming Shen

TaxoClass: Hierarchical Multi-Label Text Classification Using Only Class Names
Jiaming Shen