
Pei Ke
Research Topics
large language models, evaluation, alignment, pre-trained language models, natural language generation, evaluation metrics, diversity, defense, knowledge-graph-to-text generation, toxicity, text evaluation, NLP
SHORT BIO
I’m a postdoctoral researcher in the Conversational AI Group, Department of Computer Science and Technology, Tsinghua University, working with Prof. Minlie Huang. Before that, I received my Ph.D. in Computer Science and Technology from Tsinghua University, advised by Prof. Xiaoyan Zhu and Prof. Minlie Huang. My research interests include natural language generation, dialogue systems, and sentiment analysis.
Presentations

CritiqueLLM: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation
Pei Ke and 11 other authors

Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization
Zhexin Zhang and 5 other authors

Black-Box Prompt Optimization: Aligning Large Language Models without Model Training
Jiale Cheng and 7 other authors

Learning Task Decomposition to Assist Humans in Competitive Programming
Jiaxin Wen and 5 other authors

Unveiling the Implicit Toxicity in Large Language Models
Jiaxin Wen and 6 other authors

DecompEval: Evaluating Generated Texts as Unsupervised Decomposed Question Answering
Pei Ke and 6 other authors

Directed Acyclic Transformer Pre-training for High-quality Non-autoregressive Text Generation
Fei Huang and 2 other authors

Learning Instructions with Unlabeled Data for Zero-Shot Cross-Task Generalization
Yuxian Gu and 3 other authors

CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation
Pei Ke and 6 other authors

Rethinking and Refining the Distinct Metric
Siyang Liu and 5 other authors

JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs
Pei Ke and 7 other authors