
Chanjun Park

Research Professor @ Korea University

Topics: machine translation, llm, large language models, multi-task learning, educational application, benchmark, zero-shot, fairness, instruction following, quality estimation, instruction understanding, efficient training, bias, contrastive learning, evaluation

18 presentations · 8 views

SHORT BIO

Chanjun Park is a Research Professor at Korea University. Before joining Korea University, he served as a Principal Research Engineer and Technical Leader for the Large Language Models (LLMs) team at Upstage, where he contributed to building an ecosystem for LLMs. He also worked as a Research Engineer at SYSTRAN, contributing to the development of machine translation (MT) and automatic speech recognition (ASR) systems. He earned his Ph.D. at Korea University under the supervision of Professor Heuiseok Lim. He has authored over 100 publications in leading NLP conferences and journals, such as ACL, EMNLP, NAACL, EACL, and COLING. He has delivered over 70 invited talks and has significant teaching experience. Additionally, he holds more than 10 patents in the field of natural language processing (NLP). His achievements include recognition in Forbes 30 Under 30 Korea in the SCIENCE / SW field and the Naver Ph.D. Fellowship. He has been actively involved in the academic community, holding roles such as Virtual Social Chair at COLING 2022, Publication Chair for DMLR at ICLR 2024, and Program Chair for the WiNLP Workshop.

Presentations

Where am I? Large Language Models Wandering between Semantics and Structures in Long Contexts

Seonmin Koo and 4 other authors

Translation of Multifaceted Data without Re-Training of Machine Translation Systems

Hyeonseok Moon and 5 other authors

SAAS: Solving Ability Amplification Strategy for Enhanced Mathematical Reasoning in Large Language Models

Hyeonwoo Kim and 6 other authors

Search if you don't know! Knowledge-Augmented Korean Grammatical Error Correction with Large Language Models

Seonmin Koo and 3 other authors

KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models

Jaehyung Seo and 5 other authors

Length-aware Byte Pair Encoding for Mitigating Over-segmentation in Korean Machine Translation

Jungseob Lee and 8 other authors

Open Ko-LLM Leaderboard: Evaluating Large Language Models in Korean with Ko-H5 Benchmark

Chanjun Park and 7 other authors

SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling

Chanjun Park and 5 other authors

Explainable CED: A Dataset for Explainable Critical Error Detection in Machine Translation

Dahyun Jung and 3 other authors

Exploring Inherent Biases in LLMs within Korean Social Context: A Comparative Analysis of ChatGPT and GPT-4

Seungyoon Lee and 4 other authors

Hyper-BTS Dataset: Scalability and Enhanced Analysis of Back TranScription (BTS) for ASR Post-Processing

Chanjun Park and 7 other authors

Generative Interpretation: Toward Human-Like Evaluation for Educational Question-Answer Pair Generation

Hyeonseok Moon and 5 other authors

CHEF in the Language Kitchen: A Generative Data Augmentation Leveraging Korean Morpheme Ingredients

Jaehyung Seo and 5 other authors

KEBAP: Korean Error Explainable Benchmark Dataset for ASR and Post-processing

Seonmin Koo and 6 other authors

Informative Evidence-guided Prompt-based Fine-tuning for English-Korean Critical Error Detection

Dahyun Jung and 5 other authors

PicTalky: Augmentative and Alternative Communication for Language Developmental Disabilities

Chanjun Park

