
Heng Ji
Professor @ University of Illinois at Urbana-Champaign
information extraction
contrastive learning
large language model
fact-checking
event extraction
relation extraction
personalization
cross-lingual transfer
social media
computational social science
pre-training
dataset
misinformation
sentiment analysis
87 presentations · 173 views · 1 citation
SHORT BIO
Heng Ji is a professor in the Computer Science Department, and an affiliated faculty member in the Electrical and Computer Engineering Department and the Coordinated Science Laboratory, at the University of Illinois Urbana-Champaign. She is an Amazon Scholar and the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University, and her M.S. and Ph.D. in Computer Science from New York University. Her research focuses on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge-enhanced Large Language Models, and Vision-Language Models.

She was selected as a "Young Scientist" by the World Laureates Association in 2023 and 2024, and as a "Young Scientist" and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. She was named one of the Women Leaders of Conversational AI (Class of 2023) by Project Voice. Her other awards include two Outstanding Paper Awards at NAACL 2024, the "AI's 10 to Watch" Award from IEEE Intelligent Systems in 2013, an NSF CAREER Award in 2009, the PACLIC 2012 Best Paper Runner-up, "Best of ICDM 2013" and "Best of SDM 2013" paper awards, an ACL 2018 Best Demo Paper nomination, the ACL 2020 Best Demo Paper Award, the NAACL 2021 Best Demo Paper Award, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, and Bosch Research Awards in 2014-2018.

She was invited to testify to the U.S. House Cybersecurity, Data Analytics, & IT Committee as an AI expert in 2023, and was selected to participate in DARPA AI Forward in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030, and to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023.

She leads many multi-institution projects and tasks, including the U.S. ARL projects on information fusion and knowledge network construction, the DARPA ECOLE MIRACLE team, the DARPA KAIROS RESIN team, and the DARPA DEFT Tinker Bell team. She coordinated the NIST TAC Knowledge Base Population task from 2010 to 2020. She was an associate editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing, and served as Program Committee Co-Chair of many conferences, including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA, NSF, DoE, ARL, IARPA, AFRL, DHS) and industry (Amazon, Google, Bosch, IBM, Disney).
Presentations

Enriching Conceptual Knowledge in Language Models through Metaphorical Reference Explanation
Heng Ji and 1 other author

Information Association for Language Model Updating by Mitigating LM-Logical Discrepancy
Pengfei Yu and 1 other author

Training-free Deep Concept Injection Enables Language Models for Video Question Answering
Xudong Lin and 4 other authors

Why Does New Knowledge Create Messy Ripple Effects in LLMs?
Jiaxin Qin and 5 other authors

Panel: Increasing significance of NLP in the age of Large Language Models (LLMs)
Monojit Choudhury and 4 other authors

Mitigating the Alignment Tax of RLHF
Yong Lin and 15 other authors

EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation
Jiateng Liu and 7 other authors

Finer: Investigating and Enhancing Fine-Grained Visual Concept Recognition in Large Vision Language Models
Jeonghwan Kim and 1 other author

Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning
Kung-Hsiang Huang and 7 other authors

SciMON: Scientific Inspiration Machines Optimized for Novelty
Qingyun Wang and 3 other authors

ActionIE: Action Extraction from Scientific Literature with Programming Languages
Xianrui Zhong and 8 other authors

R-Tuning: Instructing Large Language Models to Say ‘I Don’t Know’
Hanning Zhang and 8 other authors

LETI: Learning to Generate from Textual Interactions
Xingyao Wang and 3 other authors

LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models
Chi Han and 6 other authors

Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models
Yangyi Chen and 4 other authors

Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences
Hongyi Liu and 3 other authors