Cheng-Han Chiang

PhD Student @ National Taiwan University, Taiwan

Topics: large language models (LLMs), automatic evaluation, human evaluation, multi-hop reasoning, representation learning, sentence embeddings, sentence encoders, semantic textual similarity, retrieval-augmented generation, chain-of-thought, synonym substitution attacks

11 presentations · 3 views

SHORT BIO

I am a second-year PhD student at National Taiwan University (NTU) in Taipei, Taiwan, and a member of the Speech Processing and Machine Learning (SPML) Lab, advised by Prof. Hung-yi Lee.

My main research interest is natural language processing, especially self-supervised learning and pre-trained language models. I started my research in the BERT era, investigating why BERT works so well on downstream tasks. In the LLM era, I still focus on pre-trained language models, including how to use LLMs in diverse scenarios and how to augment them with retrieval. I am also interested in evaluating diverse tasks and in how to assess an ML system reliably.

I am looking for an internship position in the summer of 2024.

Presentations

Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course

Cheng-Han Chiang and 4 other authors

Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations

Guan-Ting Lin and 2 other authors

Merging Facts, Crafting Fallacies: Evaluating the Contradictory Nature of Aggregated Factual Claims in Long-Form Generations

Cheng-Han Chiang and 1 other author

Over-Reasoning and Redundant Calculation of Large Language Models

Cheng-Han Chiang and 1 other author

A Closer Look into Using Large Language Models for Automatic Evaluation

Cheng-Han Chiang and 1 other author

Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems

Cheng-Han Chiang

Can Large Language Models Be an Alternative to Human Evaluations?

Cheng-Han Chiang and 1 other author

Are Synonym Substitution Attacks Really Synonym Substitution Attacks?

Cheng-Han Chiang and 1 other author

Recent Advances in Pre-trained Language Models: Why Do They Work and How Do They Work

Cheng-Han Chiang

On the Transferability of Pre-Trained Language Models: A Study from Artificial Datasets

Cheng-Han Chiang and 1 other author
