
Boxi Cao
Ph.D. candidate @ Institute of Software, Chinese Academy of Sciences
zero-shot learning
few-shot learning
computational social science
language model
relation extraction
pretrained language model
evaluation bias
knowledge injection
alignment
large language model
factual knowledge
prompt-based probing
causal model
instruction fine-tuning
SHORT BIO
I am a Ph.D. candidate (since September 2019) in the Chinese Information Processing Laboratory at the Institute of Software, Chinese Academy of Sciences, under the supervision of Professor Xianpei Han and Professor Le Sun. I received my Bachelor's degree from Beijing University of Posts and Telecommunications in June 2019.
Presentations

Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation
Ruotong Pan and 7 other authors

Learning or Self-aligning? Rethinking Instruction Fine-tuning
Mengjie Ren and 8 other authors

Does the Correctness of Factual Knowledge Matter for Factual Knowledge-Enhanced Pre-trained Language Models?
Boxi Cao and 4 other authors

Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View
Boxi Cao and 4 other authors