Zhiting Hu
text generation
world model
language model
natural language understanding
reasoning
knowledge graph
zero-shot learning
data-to-text
few-shot learning
weak supervision
pretrained language models
natural language generation
interpretability
summarization
low-resource
SHORT BIO
Zhiting Hu is an Assistant Professor in the Halicioglu Data Science Institute at UC San Diego. He received his Bachelor's degree in Computer Science from Peking University in 2014 and his Ph.D. in Machine Learning from Carnegie Mellon University in 2020. His research interests lie in the broad areas of machine learning, artificial intelligence, natural language processing, and ML systems. In particular, he is interested in the principles, methodologies, and systems for training AI agents with all types of experiences (data, symbolic knowledge, rewards, adversaries, lifelong interplay, etc.), and in their applications to controllable text generation, healthcare, and other domains. His research has been recognized with a best demo nomination at ACL 2019 and an outstanding paper award at ACL 2016.
Presentations
Do Vision-Language Models Have Internal World Models? Towards an Atomic Evaluation
Qiyue Gao and 23 other authors
UOUO: Uncontextualized Uncommon Objects for Measuring Knowledge Horizons of Vision Language Models
Xinyu Pi and 7 other authors
MMToM-QA: Multimodal Theory of Mind Question Answering
Chuanyang Jin and 9 other authors
RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs
Bowen Tan and 7 other authors
Composable Text Controls in Latent Space with ODEs
Guangyi Liu and 9 other authors
Reasoning with Language Model is Planning with World Model
Shibo Hao and 6 other authors
BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models
Shibo Hao and 7 other authors
AlignScore: Evaluating Factual Consistency with A Unified Alignment Function
Yuheng Zha and 3 other authors
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng and 8 other authors
Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation
Mingkai Deng and 4 other authors
Don't Take It Literally: An Edit-Invariant Sequence Loss for Text Generation
Guangyi Liu and 1 other author
ASDOT: Any-Shot Data-to-Text Generation with Pretrained Language Models
Jiannan Xiang and 4 other authors