Zixuan Ke

Researcher @ UIC / Salesforce Research

Research topics: language model, continual learning, information retrieval, imbalanced learning, retrieval-augmented generation, large language model, domain-adaptive pre-training, domain-aware learning, general knowledge preservation, transfer learning

7 presentations · 11 views

SHORT BIO

I am a final-year Ph.D. student at the University of Illinois Chicago. My research studies how to make the knowledge in foundation models, particularly large language models (LLMs), more reusable and updatable. This includes, but is not limited to: (1) large language models (pre-training, post-training, and frontiers such as retrieval-augmented LLMs), presented at ICLR23, EMNLP22a, and EMNLP22b; (2) continual and lifelong learning (task-, class-, and domain-incremental), presented at ICML23 and NeurIPS20, 21, and 22; (3) natural language processing (classification, generation, and extraction), presented at EMNLP23, NAACL21, and EMNLP21; (4) argument mining, presented at ACL18, ACL19, IJCAI18, and IJCAI19.

PRESENTATIONS

  • Continual Training of Language Models for Few-Shot Learning (Zixuan Ke)
  • Adapting a Language Model While Preserving its General Knowledge (Zixuan Ke)
  • CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks (Zixuan Ke and 3 other authors)
  • Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks (Zixuan Ke and 2 other authors)
