
Seonghyeon Lee

language models, analysis, evaluation, question answering, classification, interpretability, keyphrase generation, graph neural networks, regularization, code generation, datasets, contextual embedding, out-of-manifold, mixup, open-ended question

5 presentations · 4 views

SHORT BIO

I'm a Ph.D. candidate at POSTECH, currently working on investigating and interpreting pre-trained language models in the Data Intelligence Lab with my advisor Hwanjo Yu. I have jointly worked with ScatterLab since February 2021. I am interested in understanding and utilizing the semantics encoded inside pre-trained language models for practical applications such as dialogue systems.

Presentations

Exploring Language Model’s Code Generation Ability with Auxiliary Functions

Seonghyeon Lee and 4 other authors

Learning Topology-Specific Experts for Molecular Property Prediction

Suyeon Kim and 4 other authors

Toward Interpretable Semantic Textual Similarity via Optimal Transport-based Contrastive Sentence Learning

Seonghyeon Lee and 3 other authors

OoMMix: Out-of-manifold Regularization in Contextual Embedding Space for Text Classification

Seonghyeon Lee and 2 other authors

Topic Taxonomy Expansion via Hierarchy-Aware Topic Phrase Generation

Dongha Lee and 5 other authors
