Mohit Bansal

UNC Chapel Hill

Topics: summarization, LLMs, compositional generalization, data augmentation, interpretability, commonsense reasoning, code generation, language generation, benchmark, continual learning, conversation, factuality, multi-document summarization, large language models, dataset

91 presentations · 116 views · 2 citations

SHORT BIO

Dr. Mohit Bansal is the John R. & Louise S. Parker Professor and the Director of the MURGe-Lab (UNC-NLP Group) in the Computer Science department at UNC Chapel Hill. He received his PhD from UC Berkeley in 2013 and his BTech from IIT Kanpur in 2008. His research expertise is in natural language processing and multimodal machine learning, with a particular focus on multimodal generative models, grounded and embodied semantics, language generation and Q&A/dialogue, and interpretable and generalizable deep learning. He is a recipient of the IIT Kanpur Young Alumnus Award, a DARPA Director's Fellowship, an NSF CAREER Award, a Google Focused Research Award, a Microsoft Investigator Fellowship, an Army Young Investigator Award (YIP), a DARPA Young Faculty Award (YFA), and outstanding paper awards at ACL, CVPR, EACL, COLING, and CoNLL. He has been a keynote speaker at the AACL 2023 and INLG 2022 conferences. His service includes the ACL Executive Committee, the ACM Doctoral Dissertation Award Committee, CoNLL Program Co-Chair, ACL Americas Sponsorship Co-Chair, and Associate/Action Editor for the TACL, CL, IEEE/ACM TASLP, and CSL journals. Webpage: https://www.cs.unc.edu/~mbansal/

Presentations

  • On Positional Bias of Faithfulness for Long-form Summarization (David Wan and 3 other authors)
  • AdaCAD: Adaptively Decoding to Balance Conflicts between Contextual and Parametric Knowledge (Han Wang and 3 other authors)
  • MAMM-Refine: A Recipe for Improving Faithfulness in Generation with Multi-Agent Collaboration (David Wan and 3 other authors)
  • Teaching Models to Balance Resisting and Accepting Persuasion (Elias Stengel-Eskin and 2 other authors)
  • Reverse Thinking Makes LLMs Stronger Reasoners (Justin Chen and 10 other authors)
  • A Simple LLM Framework for Long-Range Video Question-Answering (Ce Zhang and 6 other authors)
  • Knowledge-Aware Reasoning over Multimodal Semi-structured Tables (Suyash Vardhan Mathur and 7 other authors)
  • Explaining and Improving Contrastive Decoding by Extrapolating the Probabilities of a Huge and Hypothetical LM (Haw-Shiuan Chang and 4 other authors)
  • LLM Self-Correction with DeCRIM: Decompose, Critique, and Refine for Enhanced Following of Instructions with Multiple Constraints (Thomas Palmeira Ferraz and 9 other authors)
  • Opening Session (Yaser Al-Onaizan and 3 other authors)
  • The Unreasonable Effectiveness of Easy Training Data for Hard Tasks (Peter Hase and 3 other authors)
  • Soft Self-Consistency Improves Language Model Agents (Han Wang and 3 other authors)
  • Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings (Yichen Jiang and 2 other authors)
  • The Power of Summary-Source Alignments (Ori Ernst and 7 other authors)
  • ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs (Justin Chen and 2 other authors)
  • Evaluating Very Long-Term Conversational Memory of LLM Agents (Adyasha Maharana and 5 other authors)

Underline Science, Inc.
© 2025 Underline - All rights reserved