
Ruiqi Zhong

TOPICS

language models, meta-learning, few-shot learning, code generation, alignment, pre-training, analysis, statistics, in-context learning, language model prompting, scalable oversight

4 presentations · 28 views

SHORT BIO

My name is Ruiqi Zhong. I am a fifth-year PhD student in the EECS department at UC Berkeley, advised by Prof. Jacob Steinhardt and Prof. Dan Klein.

Presentations

Learning Task Decomposition to Assist Humans in Competitive Programming

Jiaxin Wen and 5 other authors

Meta-learning via Language Model In-context Tuning

Yanda Chen and 4 other authors

Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections

Ruiqi Zhong and 3 other authors

Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

Ruiqi Zhong and 3 other authors
