Hwaran Lee
Topics: dataset, llm, human-in-the-loop, human-machine collaboration, ethics, reasoning, adversarial examples, question answering, nlp, language model, large language models, continual learning, social bias, uncertainty quantification, benchmark

9 presentations · 2 views
SHORT BIO
Hwaran Lee is a research scientist at NAVER AI Lab, working on natural language processing and machine learning. Her current primary research interests are controllable language generation, dialog systems, and safety and ethics for AI. She obtained her Ph.D. in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2018 and her B.S. in Mathematical Science from KAIST in 2012. Before joining NAVER AI Lab, she worked at SK T-Brain as a research scientist from 2018 to 2021.
Presentations
- KorNAT: LLM Alignment Benchmark for Korean Social Values and Common Knowledge (Jiyoung Lee and 6 other authors)
- Calibrating Large Language Models Using Their Generations Only (Dennis Ulmer and 4 other authors)
- TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models (Jaewoo Ahn and 6 other authors)
- TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification (Martin Gubri and 4 other authors)
- Who Wrote this Code? Watermarking for Code Generation (Taehyun Lee and 7 other authors)
- LifeTox: Unveiling Implicit Toxicity in Life Advice (Minbeom Kim and 5 other authors)
- SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created through Human-Machine Collaboration (Hwaran Lee and 12 other authors)
- KoSBI: A Dataset for Mitigating Social Bias Risks Towards Safer Large Language Model Applications (Hwaran Lee and 5 other authors)