
SHORT BIO
Byung-Doh Oh is a PhD candidate in computational linguistics at The Ohio State University. His prior work includes developing computational models of human sentence processing and unsupervised grammar induction. https://byungdoh.github.io
Presentations

Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times
Byung-Doh Oh and 2 other authors

Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions
Byung-Doh Oh and 1 other author

Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
Byung-Doh Oh and 1 other author

Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal
Byung-Doh Oh and 1 other author

Coreference-aware Surprisal Predicts Brain Response
Evan Jaffe and 2 other authors

Character-based PCFG Induction for Modeling the Syntactic Acquisition of Morphologically Rich Languages
Byung-Doh Oh and 2 other authors

Surprisal Estimators for Human Reading Times Need Character Models
Byung-Doh Oh and 2 other authors