
Zhihan Zhang
Ph.D. Student @ University of Notre Dame
large language models
benchmark
evaluation
fine-tuning
open-domain question answering
question answering
information retrieval
pre-trained language model
nlp
multilingual language models
generation
multi-task learning
prompting
language models
11 presentations
14 views
SHORT BIO
Zhihan is a third-year Ph.D. student in the DM2 lab at the University of Notre Dame, under the supervision of Dr. Meng Jiang. His recent research focuses on large language models and instruction tuning, and he also has research experience in knowledge-augmented NLP and text retrieval. He has published multiple papers at top-tier NLP venues such as ACL, EMNLP, and TACL.
Presentations

TOWER: Tree Organized Weighting for Evaluating Complex Instructions
Noah Ziems and 2 other authors

Large Language Models Can Self-Correct with Key Condition Verification
Zhenyu Wu and 5 other authors

Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning
Zhihan Zhang and 7 other authors

PLUG: Leveraging Pivot Language in Cross-Lingual Instruction Tuning
Zhihan Zhang and 6 other authors

Pre-training Language Models for Comparative Reasoning
Mengxia Yu and 3 other authors

Auto-Instruct: Automatic Instruction Generation and Ranking for Black-Box Language Models
Zhihan Zhang and 8 other authors

Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions
Zhihan Zhang and 4 other authors

Large Language Models are Built-in Autoregressive Search Engines
Noah Ziems and 3 other authors

A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods
Zhihan Zhang and 4 other authors

A Unified Encoder-Decoder Framework with Entity Memory
Zhihan Zhang

Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts
Wenhao Yu and 5 other authors