
Shizhe Diao

PhD @ Hong Kong University of Science and Technology

Topics: large language models, domain adaptation, fine-tuning, text generation, information extraction, reasoning, commonsense, natural language processing, generative adversarial networks, domain knowledge, evaluation framework, pre-training, machine translation

13 presentations · 17 views

SHORT BIO

Shizhe is a research scientist at NVIDIA Research. He completed his Ph.D. at the Hong Kong University of Science and Technology, advised by Professor Tong Zhang. He is passionate about research on the pre-training, efficient tuning, and alignment of large foundation models.

Presentations

Active Prompting with Chain-of-Thought for Large Language Models

Shizhe Diao and 5 other authors

Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards

Haoxiang Wang and 7 other authors

Plum: Prompt Learning using Metaheuristics

Rui Pan and 8 other authors

VeraCT Scan: Retrieval-Augmented Fake News Detection with Justifiable Reasoning

Cheng Niu and 9 other authors

R-Tuning: Instructing Large Language Models to Say ‘I Don’t Know’

Hanning Zhang and 8 other authors

LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models

Shizhe Diao and 6 other authors

ConstraintChecker: A Plugin for Large Language Models to Reason on Commonsense Knowledge Bases

Quyet V. Do and 4 other authors

Doolittle: Benchmarks and Corpora for Academic Writing Formalization

Shizhe Diao and 7 other authors

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data

KaShun Shum and 2 other authors

DetGPT: Detect What You Need via Reasoning

Renjie Pi and 10 other authors

Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memories

Shizhe Diao and 4 other authors

Taming Pre-trained Language Models with N-gram Representations for Low-Resource Domain Adaptation

Shizhe Diao

TILGAN: Transformer-based Implicit Latent GAN for Diverse and Coherent Text Generation

Shizhe Diao
