Byung-Doh Oh

Graduate student at The Ohio State University

Topics: cognitive modeling, large language models, surprisal theory, sentence processing, interpretability, human reading times, self-attention mechanism, attribution method, memory-based effects

8 presentations · 15 views

SHORT BIO

Byung-Doh Oh is a PhD candidate in computational linguistics at The Ohio State University. His prior work includes developing models of human sentence processing and unsupervised grammar induction. Website: https://byungdoh.github.io

Presentations

Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times

Byung-Doh Oh and 2 other authors

Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions

Byung-Doh Oh and 1 other author

Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?

Byung-Doh Oh and 1 other author

Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal

Byung-Doh Oh and 1 other author

Coreference-aware Surprisal Predicts Brain Response

Evan Jaffe and 2 other authors

Character-based PCFG Induction for Modeling the Syntactic Acquisition of Morphologically Rich Languages

Byung-Doh Oh and 2 other authors

Surprisal Estimators for Human Reading Times Need Character Models

Byung-Doh Oh and 2 other authors
