
Haoran Yang

Post-graduate student @ The Chinese University of Hong Kong

Topics: generalization, large language models, llms, rephrasing, prompt tuning, in-context learning, other foundations of machine learning, learning & optimization for snlp, interpretability & analysis of nlp models, other foundations of speech & natural language processing, commonsense generation, order issue, compositional instruction, memory-efficient fine-tuning, fine-tuning

7 presentations · 5 views

SHORT BIO

Haoran Yang is a postgraduate student at The Chinese University of Hong Kong, supervised by Professor Wai Lam. He is affiliated with the CUHK Text Mining Group. His research mainly focuses on Natural Language Processing, including text generation, pretrained language models, and parameter-efficient tuning.

Presentations

Chain-of-Dictionary Prompting Elicits Translation in Large Language Models

Hongyuan Lu and 5 other authors

A Thorough Examination of Decoding Methods in the Era of LLMs

Chufan Shi and 6 other authors

Unveiling the Generalization Power of Fine-Tuned Large Language Models

Haoran Yang and 5 other authors

Rephrasing Invokes Better Generations for Large Language Models

Haoran Yang and 2 other authors

Exploring Compositional Generalization of Large Language Models

Haoran Yang and 3 other authors

Bridging the Gap between Pre-Training and Fine-Tuning for Commonsense Generation

Haoran Yang

On the Effectiveness of Parameter-Efficient Fine-Tuning

Zihao Fu and 5 other authors
