
SHORT BIO
My primary research interests lie at the intersection of natural language processing (NLP) and machine learning. I aspire to build a general-purpose NLP system that surpasses human-level linguistic capabilities and disseminates knowledge worldwide. I strongly believe that my research in NLP will contribute to the development of such a system, and I will orient my work toward that goal by solving open challenges in the field.
Specifically, I am interested in developing neuro-symbolic language models that can process symbols with particular characteristics (e.g., numbers) and interpret those symbols in light of the given context. Recently, I have also become interested in the interaction between the parametric and non-parametric knowledge of pre-trained LMs.
Presentations

Finer: Investigating and Enhancing Fine-Grained Visual Concept Recognition in Large Vision Language Models
Jeonghwan Kim and 1 other author

Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise
Giwon Hong and 4 other authors

Exploiting Numerical-Contextual Knowledge to Improve Numerical Reasoning in Question Answering
Jeonghwan Kim and 4 other authors