
Ning Ding
Tsinghua University
Topics: dataset, prompt tuning, parameter-efficient, language models, knowledge bases, NLP, NER, pretrained language models, contrastive, large language models, prompting, knowledge transfer, data, event relation extraction, few-shot learning
13 presentations · 25 views
SHORT BIO
Ning Ding is a Ph.D. student at Tsinghua University, studying machine learning and natural language processing. His research has been published at venues including ICLR, ACL, and EMNLP. He is a recipient of the Baidu Ph.D. Fellowship and the China National Scholarship.
Presentations

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning
Yusheng Su and 11 other authors

Sparse Low-rank Adaptation of Pre-trained Language Models
Ning Ding and 6 other authors

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations
Ning Ding and 7 other authors

CRaSh: Clustering, Removing, and Sharing Enhance Fine-tuning without Full Large Language Model
Kaiyan Zhang and 5 other authors

Parameter-efficient Weight Ensembling Facilitates Task-level Knowledge Transfer
Xingtai Lv and 1 other author

Exploring Lottery Prompts for Pre-trained Language Models
Yulin Chen and 6 other authors

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction
Xiaozhi Wang and 11 other authors

ProQA: Structural Prompt-based Pre-training for Unified Question Answering
Yifan Gao and 8 other authors

Prototypical Verbalizer for Prompt-based Few-shot Tuning
Ganqu Cui and 4 other authors

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
Shengding Hu and 7 other authors

OpenPrompt: An Open-source Framework for Prompt-learning
Ning Ding and 6 other authors

CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding
Dong Wang and 1 other author

Few-NERD: A Few-shot Named Entity Recognition Dataset
Ning Ding