
Ze-Feng Gao

Research topics: model compression, pre-trained language models, natural language processing, decomposition, tensor decomposition, matrix decomposition, matrix product operators (MPO), quantization, outliers, lightweight fine-tuning, mixture-of-experts, over-parameterization

4 presentations · 12 views

SHORT BIO

Ze-Feng Gao is a Postdoctoral Researcher at the Gaoling School of Artificial Intelligence, Renmin University of China. His research focuses on natural language processing (NLP), with a particular interest in parameter-efficient utilization of large language models, such as parameter-efficient fine-tuning and model compression.

Presentations

Unlocking Data-free Low-bit Quantization with Matrix Decomposition for KV Cache Compression

Peiyu Liu and 5 other authors

Small Pre-trained Language Models Can be Fine-tuned as Large Models via Over-Parameterization

Ze-Feng Gao and 4 other authors

Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models

Ze-Feng Gao and 1 other author

Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators

Peiyu Liu and 1 other author

